Data Management Report Transformative Change Assessment Corpus - SOD

Author: Rainer M. Krug
Affiliation:
Published: April 19, 2024
DOI:
Abstract

The literature search for the assessment corpus was conducted using search terms provided by the experts and refined in cooperation with the Knowledge and Data task force. The search was run against OpenAlex via its API, scripted from R. Search terms were defined for the following searches: Transformative Change, Nature / Environment, and additional search terms for individual chapters and sub-chapters. To assess the quality of the corpus, sets of key papers were selected by the experts to verify that these are in the corpus. These key papers were selected per chapter / sub-chapter to ensure that the corpus is representative of each chapter.

Keywords

DMR, TCA, Assessment Corpus

License: CC BY 4.0

Working Title

IPBES_TCA_Corpus

Code repo

GitHub repository

Build No: 549

Introduction

The following terminology is used in this document:

  • Individual corpus: The corpus resulting from one search term, e.g. transformative or nature or ChX_Y
  • Assessment Corpus: The corpus resulting from the search terms transformative AND nature
  • Chapter corpus: The corpus resulting from transformative AND Nature AND ChX_Y
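
The composition of these corpora can be sketched as plain string concatenation (a minimal illustration; the short strings below are placeholders standing in for the full search terms defined in the following sections):

```r
# Placeholder strings standing in for the full search terms defined below.
transformative <- '"transformative change" OR transition'
nature <- "biodiversity OR ecosystem"
ch1_01 <- "(root OR underlying) AND (driver OR cause)"

# Assessment Corpus: transformative AND nature
assessment_corpus <- paste0("(", nature, ") AND (", transformative, ")")

# Chapter corpus: transformative AND nature AND ChX_Y
chapter_corpus <- paste0(assessment_corpus, " AND (", ch1_01, ")")

cat(chapter_corpus)
```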

The following searches are conducted on title and abstract only, because the availability of fulltext drops after 2020. When OpenAlex launched in 2021, it inherited its initial corpus from Microsoft Academic, which included fulltext for searching for a large share of works. Works added after that time come from other sources which do not include fulltext for searches. To eliminate this bias, we limit the search to terms in abstract and title only.
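Concretely, such a title-and-abstract-restricted count can be obtained via the `title_and_abstract.search` filter of the OpenAlex API (a minimal sketch using `openalexR`; the search string here is a placeholder, not one of the actual corpus terms):

```r
library(openalexR)

# Count works whose title or abstract matches the search string --
# the same restriction used for all corpus searches in this report.
hits <- oa_query(
    filter = list(title_and_abstract.search = '"transformative change"')
) |>
    oa_request(count_only = TRUE, verbose = FALSE) |>
    unlist()

hits["count"]
```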

Schematic Overview

TODO

Search Terms

Here are the search terms used in this document. They were provided by the authors, with some adaptations made by the TSU to adapt them to OpenAlex.

Transformative Change

Show the code
cat(params$s_1_transformative_change)
(
    (
        (
            transformation
            OR transition
            OR transformative
            OR "transformative change"
        )
        OR (
            (
                shift
                OR change
            )
            AND (
                fundamental
                OR deep
                OR radical
            )
        )
    )
    AND (
        socio
        OR social
        OR politics
        OR political
        OR governance
        OR economic
        OR cultural
        OR system
        OR technological
        OR inner
        OR personal
        OR financial
        OR business
    )
)
OR (
    (
        "transformative change"
        OR "deliberate transformation"
        OR "transformative turn"
        OR transition
        OR "social-ecological change"
        OR "deep change"
        OR "fundamental alteration"
        OR "profound change"
        OR "profound transformation"
        OR "radical transformation"
        OR "transformational change"
        OR "complete change"
        OR "complete transformation"
        OR "drastic change"
        OR "in-depth transformation"
        OR "progressive change"
        OR "radical alteration"
        OR "radical change"
        OR "revolutionary change"
        OR "significant modification"
        OR "total transformation"
        OR transition
        OR pathway
        OR power
        OR agency
        OR scale
        OR leverage
        OR context
        OR process
        OR regime
        OR shift
        OR views
        OR value
        OR structure
        OR institution
        OR deliberate
        OR structural
        OR fundamental
        OR system
        OR deep
        OR radical
        OR profound
        OR drastic
        OR widespread
        OR political
        OR economical
        OR structur
        OR complete
        OR progressive
        OR revolutionary
        OR substantial
        OR significant
    )
    AND (
        transformation
        OR alteration
        OR change
        OR turn
        OR action
        OR transition
        OR shift
    )
)

Nature

Show the code
#|

cat(params$s_1_nature_environment)
biodiversity
OR marine
OR terrestrial
OR forest
OR woodland
OR grassland
OR savanna
OR shrubland
OR peatland
OR ecosystem
OR lake
OR river
OR sea
OR ocean
OR meadow
OR heathland
OR mires
OR bog
OR tundra
OR biosphere
OR desert
OR mountain
OR "natural resource"
OR estuary
OR fjord
OR fauna
OR flora
OR soil
OR "coastal waters"
OR wetland
OR freshwater
OR marshland
OR marshes
OR dryland
OR seascape
OR landscape
OR coast
OR "arable land"
OR "agricultural land"
OR "natural environment"
OR "environmental resource"
OR agroforest
OR "agro-forest"
OR plantation
OR "protected areas"
OR chaparral
OR sustainable
OR environment
OR conservation
OR ecosystem
OR nature
OR planet
OR Earth
OR biosphere
OR ecological
OR "socio-ecological"
OR restoration
OR wildlife
OR landscape
OR species
OR bioeconomy
OR "resource system"
OR "coupled system"
OR nature

Assessment Corpus

Show the code
#|

cat(params$s_1_tca_corpus)
( biodiversity
OR marine
OR terrestrial
OR forest
OR woodland
OR grassland
OR savanna
OR shrubland
OR peatland
OR ecosystem
OR lake
OR river
OR sea
OR ocean
OR meadow
OR heathland
OR mires
OR bog
OR tundra
OR biosphere
OR desert
OR mountain
OR "natural resource"
OR estuary
OR fjord
OR fauna
OR flora
OR soil
OR "coastal waters"
OR wetland
OR freshwater
OR marshland
OR marshes
OR dryland
OR seascape
OR landscape
OR coast
OR "arable land"
OR "agricultural land"
OR "natural environment"
OR "environmental resource"
OR agroforest
OR "agro-forest"
OR plantation
OR "protected areas"
OR chaparral
OR sustainable
OR environment
OR conservation
OR ecosystem
OR nature
OR planet
OR Earth
OR biosphere
OR ecological
OR "socio-ecological"
OR restoration
OR wildlife
OR landscape
OR species
OR bioeconomy
OR "resource system"
OR "coupled system"
OR nature ) 
AND 
( (
    (
        (
            transformation
            OR transition
            OR transformative
            OR "transformative change"
        )
        OR (
            (
                shift
                OR change
            )
            AND (
                fundamental
                OR deep
                OR radical
            )
        )
    )
    AND (
        socio
        OR social
        OR politics
        OR political
        OR governance
        OR economic
        OR cultural
        OR system
        OR technological
        OR inner
        OR personal
        OR financial
        OR business
    )
)
OR (
    (
        "transformative change"
        OR "deliberate transformation"
        OR "transformative turn"
        OR transition
        OR "social-ecological change"
        OR "deep change"
        OR "fundamental alteration"
        OR "profound change"
        OR "profound transformation"
        OR "radical transformation"
        OR "transformational change"
        OR "complete change"
        OR "complete transformation"
        OR "drastic change"
        OR "in-depth transformation"
        OR "progressive change"
        OR "radical alteration"
        OR "radical change"
        OR "revolutionary change"
        OR "significant modification"
        OR "total transformation"
        OR transition
        OR pathway
        OR power
        OR agency
        OR scale
        OR leverage
        OR context
        OR process
        OR regime
        OR shift
        OR views
        OR value
        OR structure
        OR institution
        OR deliberate
        OR structural
        OR fundamental
        OR system
        OR deep
        OR radical
        OR profound
        OR drastic
        OR widespread
        OR political
        OR economical
        OR structur
        OR complete
        OR progressive
        OR revolutionary
        OR substantial
        OR significant
    )
    AND (
        transformation
        OR alteration
        OR change
        OR turn
        OR action
        OR transition
        OR shift
    )
) )

Chapter 1

01

Show the code
#|

cat(params$s_1_ch1_01)
(
    root
    OR underlying
    OR indirect
)
AND (
    driver
    OR cause
)

02

Show the code
#|

cat(params$s_1_ch1_02)
equity
OR inequity
OR just
OR unjust
OR inequality
OR equality
OR Fair
OR unfair

03

Show the code
#|

cat(params$s_1_ch1_03)
scale
OR impact
OR leapfrog
OR transfer

04

Show the code
#|

cat(params$s_1_ch1_04)
inclusive
OR participation
OR participatory
OR engagement
OR democratic
OR coproduct
OR transdisc
OR multiactor
OR "multi-actor"
OR integrat

05

Show the code
#|

cat(params$s_1_ch1_05)
evaluate
OR reflex
OR reflect
OR monitor
OR adapt
OR learn

06

Show the code
#|

cat(params$s_1_ch1_06)
responsib
OR accountable
OR rights
OR steward
OR reciprocity
OR interdependent
OR interdependency
OR (
    relation
    OR relational
    OR plural
    OR diverse
    OR "sustainability-aligned"
    OR (
        care
        AND (
            value
            OR ethic
        )
    )
)

Chapter 2

Show the code
#|

cat(params$s_1_ch2)
vision
OR future
OR visionary
OR scenarios
OR imagination
OR imagery
OR creativity
OR desire
OR wish
OR visioning
OR process
OR "participatory process"
OR "deliberate process"
OR polics
OR target
OR view
OR value
OR cosmovision
OR cosmocentric
OR dream
OR fiction
OR hope
OR mission
OR objective
OR story
OR worldview
OR aspiration
OR action
OR plan
OR strategy
OR intention
OR model
OR solution
OR innovation
OR perspective
OR platform
OR "collective action"
OR cooperation
OR consultation
OR coalition
OR response
OR movement
OR effort
OR initiative
OR activity
OR reaction
OR performance
OR operation
OR effect
OR task
OR project
OR influence
OR moment
OR discourse
OR motivation
OR iteration
OR roadmap
OR agenda
OR project
OR programm
OR government
OR technique
OR inspiration
OR culture
OR universe
OR reality
OR fantasy
OR perception
OR visualization
OR approach
OR image
OR archetype
OR existence
OR cosmology
OR "co-production"
OR knowledge
OR dialogue
OR transmission
OR conceptual
OR ceremony
OR relationships
OR respect
OR reciprocity
OR responsibilities
OR solidarity
OR harmony
OR "self-determination"
OR community
OR spiritual
OR language
OR territory
OR opportunity
OR sight
OR foresight
OR idea
OR appearance

Chapter 3

01

Show the code
#|

cat(params$s_1_ch3_01)
Technology
OR Science
OR "science-society"
OR "science-technology"
OR Solution

02

Show the code
#|

cat(params$s_1_ch3_02)
"co-create"
OR "co-creation"
OR solution
OR knowledge
OR system
OR "t-lab"
OR "technology laboratory"
OR education
OR "socio-technical"

03

Show the code
#|

cat(params$s_1_ch3_03)
System
OR pathways
OR connect
OR Agroecolog
OR Institutional
OR Institution
OR Government

04

Show the code
#|

cat(params$s_1_ch3_04)
inner
OR Personal
OR Religion
OR Love
OR Loving
OR Feelings
OR Stewardship
OR Care
OR Beliefs
OR Belief
OR believe
OR Awareness
OR "Self-Awareness"

05

Show the code
#|

cat(params$s_1_ch3_05)
Worldviews
OR Grassroot
OR "Community-based"
OR Indigenous
OR Leadership
OR "Critical Science"
OR Ecofeminism
OR "Political Ecology"
OR Power
OR Agency
OR Environment

06

Show the code
#|

cat(params$s_1_ch3_06)
Economic
OR "Political Economy"
OR institution
OR govern
OR economy
OR governance
OR government
OR globalization
OR states
OR colonial
OR colonialism
OR labour
OR organization
OR organisation

Chapter 4

01

Show the code
#|

cat(params$s_1_ch4_01)
(
    challenge
    OR barrier
    OR obstacle
    OR hinder
    OR hindrance
    OR block
    OR prevent
    OR deter
    OR inertia
    OR "path dependence"
    OR "path dependency"
    OR stasis
    OR "lock-in"
    OR trap
    OR habits
    OR habitual
    OR "status quo"
    OR power
    OR "limiting factor"
)
AND (
    "economic inequality"
    OR "Wealth concentration"
    OR "Socioeconomic inequality"
    OR financialization
    OR "uneven development"
    OR Financialization
    OR "Structural adjustment"
    OR "Sovereign Debt"
    OR inequality
    OR "Policy effectiveness"
)

02

Show the code
#|

cat(params$s_1_ch4_02)
(
    challenge
    OR barrier
    OR obstacle
    OR hinder
    OR hindrance
    OR block
    OR prevent
    OR deter
    OR inertia
    OR "path dependence"
    OR "path dependency"
    OR stasis
    OR "lock-in"
    OR trap
    OR habits
    OR habitual
    OR "status quo"
    OR power
    OR "limiting factor"
)
AND (
    "clean technology"
    OR "clean innovation*"
    OR "sustainable innovation"
    OR "sustainable technological innovation"
)
AND (
    "limited access"
    OR "limited availability"
    OR "lack of access"
    OR "unavailability"
)

Chapter 5

Vision

Show the code
#|

cat(params$s_1_ch5_vision)

Case

Show the code
#|

cat(params$s_1_case)
"data driven" OR
observational OR
experimental OR
"real world" OR
"evidence based" OR
factual OR
quantitative OR
qualitative OR
findings OR
survey OR
fieldwork OR
documented OR
verifiable OR
practical OR
"scientifically tested" OR
"data collection" OR
"results oriented" OR
variable OR
inference OR
"empirical evidence" OR
comparative OR
replication OR
interpretative OR
behavioural OR
outcome OR
dataset OR
instance OR
case OR
sample OR
"example examination" OR
"illustrative example" OR
"subject study" OR
"specific study" OR
"prototype study" OR
"field study" OR
"exploratory study" OR
"diagnostic study" OR
"in depth study"

Vision & Case

Topics

OpenAlex assigns topics to each work in a hierarchical manner:

Please see the OpenAlex documentation for more information and for a complete list of all topics and their corresponding subfields, fields and domains.
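For illustration, the topic assignments of a single work can be inspected with `openalexR` (a sketch; the DOI below is a placeholder to be replaced with a real one):

```r
library(openalexR)

# Fetch one work and inspect its hierarchical topic assignments;
# each topic entry carries its subfield, field and domain.
work <- oa_fetch(entity = "works", doi = "10.xxxx/placeholder", output = "list")

if (length(work) > 0) {
    str(work[[1]]$topics, max.level = 2)
}
```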

Methods

Get and calculate Data from OpenAlex

These data are gathered from OpenAlex directly, not from the downloaded TCA Corpus. They are used to assess the quality of the TCA Corpus.

Show the code
#|

fn <- file.path(".", "data", "tca_corpus", "search_term_hits.rds")
if (!file.exists(fn)) {
    s_t <- grep("s_1_", names(params), value = TRUE)
    search_term_hits <- parallel::mclapply(
        s_t,
        function(stn) {
            message("getting '", stn, "' ...")
            if (grepl("_f_", stn)) {
                search <- params[[stn]]()
            } else {
                search <- params[[stn]]
            }
            search <- compact(search)
            openalexR::oa_query(filter = list(title_and_abstract.search = search)) |>
                openalexR::oa_request(count_only = TRUE, verbose = TRUE) |>
                unlist()
        },
        mc.cores = params$mc.cores,
        mc.preschedule = FALSE
    ) |>
        do.call(what = cbind) |>
        t() |>
        as.data.frame() |>
        dplyr::mutate(page = NULL, per_page = NULL) |>
        dplyr::mutate(count = formatC(count, format = "f", big.mark = ",", digits = 0))

    rownames(search_term_hits) <- s_t |>
        gsub(pattern = "s_1_", replacement = "") |>
        gsub(pattern = "f_", replacement = "") |>
        gsub(pattern = "^ch", replacement = "Chapter ") |>
        gsub(pattern = "_", replacement = " ")

    saveRDS(search_term_hits, file = fn)
} else {
    search_term_hits <- readRDS(fn)
}
Show the code
#|

fn <- file.path(".", "data", "tca_corpus", "key_papers.rds")
if (!file.exists(fn)) {
    key_papers <- lapply(
        params$key_papers,
        function(fn) {
            message("Processing '", fn, "' ...")
            sapply(
                fn,
                function(x) {
                    read.csv(x) |>
                        select(DOI)
                }
            ) |>
                unlist()
        }
    )
    names(key_papers) <- gsub("\\.csv", "", basename(params$key_papers))

    key_papers <- list(
        Ch_1 = unlist(key_papers[grepl("Ch 1 -", names(key_papers))], recursive = FALSE) |> as.vector(),
        Ch_2 = unlist(key_papers[grepl("Ch 2 -", names(key_papers))], recursive = FALSE) |> as.vector(),
        Ch_3_Cl_1 = unlist(key_papers[grepl("Ch 3 - Cl1", names(key_papers))], recursive = FALSE) |> as.vector(),
        Ch_3_Cl_3 = unlist(key_papers[grepl("Ch 3 - Cl3", names(key_papers))], recursive = FALSE) |> as.vector(),
        Ch_3_Cl_4 = unlist(key_papers[grepl("Ch 3 - Cl4", names(key_papers))], recursive = FALSE) |> as.vector(),
        Ch_3_Cl_5 = unlist(key_papers[grepl("Ch 3 - Cl5", names(key_papers))], recursive = FALSE) |> as.vector(),
        Ch_3_Cl_6 = unlist(key_papers[grepl("Ch 3 - Cl6", names(key_papers))], recursive = FALSE) |> as.vector(),
        Ch_3 = unlist(key_papers[grepl("Ch 3 - p", names(key_papers))], recursive = FALSE) |> as.vector(),
        Ch_4_Cl_1 = unlist(key_papers[grepl("Ch 4 - Challenge 1", names(key_papers))], recursive = FALSE) |> as.vector(),
        Ch_4_Cl_2 = unlist(key_papers[grepl("Ch 4 - Challenge 2", names(key_papers))], recursive = FALSE) |> as.vector(),
        Ch_4_Cl_3 = unlist(key_papers[grepl("Ch 4 - Challenge 3", names(key_papers))], recursive = FALSE) |> as.vector(),
        Ch_4_Cl_4 = unlist(key_papers[grepl("Ch 4 - Challenge 4", names(key_papers))], recursive = FALSE) |> as.vector(),
        Ch_4_Cl_5 = unlist(key_papers[grepl("Ch 4 - Challenge 5", names(key_papers))], recursive = FALSE) |> as.vector(),
        Ch_5 = unlist(key_papers[grepl("Ch 5 -", names(key_papers))], recursive = FALSE) |> as.vector()
    )

    saveRDS(key_papers, file = fn)
} else {
    key_papers <- readRDS(fn)
}
Show the code
#|

fn_kw <- file.path(".", "data", "tca_corpus", "key_works.rds")
fn_kw_df <- file.path(".", "data", "tca_corpus", "key_works_df.rds")
if (!all(file.exists(fn_kw, fn_kw_df))) {
    key_works <- parallel::mclapply(
        key_papers,
        function(kp) {
            dois <- kp[kp != ""] |>
                unlist() |>
                tolower() |>
                unique()

            openalexR::oa_fetch(doi = dois, output = "list")
        },
        mc.cores = params$mc.cores,
        mc.preschedule = FALSE
    )

    found <- sapply(
        key_works,
        function(x) {
            length(x) > 0
        }
    )

    key_works <- key_works[found]

    message("The following key paper sets were excluded as they contained no papers in OpenAlex:")
    print(names(found)[!found])

    saveRDS(key_works, file = fn_kw)

    key_works_df <- lapply(
        key_works,
        oa2df,
        entity = "works"
    )

    saveRDS(key_works_df, fn_kw_df)
} else {
    key_works <- readRDS(file = fn_kw)
    key_works_df <- readRDS(fn_kw_df)
}
Show the code
#|

fn <- file.path(".", "data", "tca_corpus", "key_works_hits.rds")
if (!file.exists(fn)) {
    kws <- key_works_df
    kws$all <- key_works_df |>
        bind_rows()

    nms <- names(kws)

    key_works_hits <- pbapply::pblapply(
        nms,
        function(nm) {
            message("Getting key paper set for ", nm, " ...")
            dois <- kws[[nm]] |>
                select(doi) |>
                distinct() |>
                unlist() |>
                unique() |>
                tolower()

            s_t <- grep("s_1_", names(params), value = TRUE)
            kw_h <- parallel::mclapply(
                s_t,
                function(stn) {
                    message("  getting '", stn, "' ...")
                    if (grepl("_f_", stn)) {
                        search <- compact(params[[stn]]())
                    } else {
                        search <- compact(params[[stn]])
                    }
                    get_count(dois = dois, list(title_and_abstract.search = search), verbose = FALSE)
                },
                mc.cores = params$mc.cores,
                mc.preschedule = FALSE
            ) |>
                do.call(what = cbind) |>
                as.data.frame()
            message("Done")

            names(kw_h) <- s_t

            # if (ncol(kw_h) == 1){
            #     kw_h <- t(kw_h)
            #     rownames(kw_h) <- dois
            # }

            kw_h <- rbind(
                kw_h,
                colSums(kw_h)
            )

            rownames(kw_h)[[nrow(kw_h)]] <- "Total"
            return(kw_h)
        }
    )

    names(key_works_hits) <- nms

    for (i in nms) {
        # key_works_hits[[i]] <- cbind(
        #     key_works_hits[[i]],
        #     key_works_hits_tca_filtered[[i]]
        # )

        key_works_hits[[i]] <- cbind(
            key_works_hits[[i]],
            Total = rowSums(key_works_hits[[i]])
        ) |>
            mutate(Total = Total - 1) # |>
        # relocate(tca_corpus_SDG, .after = s_1_tca_corpus)
    }

    ###

    saveRDS(key_works_hits, file = fn)
} else {
    key_works_hits <- readRDS(file = fn)
}

Works over Time

Get works over time for different search terms

Show the code
#|

fn <- file.path(".", "data", "tca_corpus", "oa_count.rds")
if (!file.exists(fn)) {
    oa_count <- list(
        timestamp = Sys.time()
    )
    #
    message("OpenAlex ...")
    oa_count$oa_years <- openalexR::oa_fetch(
        entity = "works",
        search = "",
        group_by = "publication_year",
        output = "dataframe",
        verbose = TRUE
    ) |>
        dplyr::mutate(
            publication_year = as.integer(as.character(key_display_name)),
            key = NULL,
            key_display_name = NULL,
            p = count / sum(count)
        ) |>
        dplyr::arrange(publication_year) |>
        dplyr::mutate(
            p_cum = cumsum(p)
        ) |>
        dplyr::select(
            publication_year,
            everything()
        )
    #
    message("NATURE ...")
    oa_count$tca_nature <- openalexR::oa_fetch(
        title_and_abstract.search = compact(paste0("(", params$s_1_nature_environment, ")")),
        group_by = "publication_year",
        output = "dataframe",
        verbose = TRUE
    ) |>
        dplyr::mutate(
            publication_year = as.integer(as.character(key_display_name)),
            key = NULL,
            key_display_name = NULL,
            p = count / sum(count)
        ) |>
        dplyr::arrange(publication_year) |>
        dplyr::mutate(
            p_cum = cumsum(p)
        ) |>
        dplyr::select(
            publication_year,
            everything()
        )
    #
    message("Transformative Change ...")
    oa_count$transformative_change_years <- openalexR::oa_fetch(
        title_and_abstract.search = compact(paste0("(", params$s_1_transformative_change, ")")),
        group_by = "publication_year",
        output = "dataframe",
        verbose = TRUE
    ) |>
        dplyr::mutate(
            publication_year = as.integer(as.character(key_display_name)),
            key = NULL,
            key_display_name = NULL,
            p = count / sum(count)
        ) |>
        dplyr::arrange(publication_year) |>
        dplyr::mutate(
            p_cum = cumsum(p)
        ) |>
        dplyr::select(
            publication_year,
            everything()
        )
    #
    message("TCA ...")
    oa_count$tca_years <- openalexR::oa_fetch(
        title_and_abstract.search = compact(paste0("(", params$s_1_tca_corpus, ")")),
        group_by = "publication_year",
        output = "dataframe",
        verbose = TRUE
    ) |>
        dplyr::mutate(
            publication_year = as.integer(as.character(key_display_name)),
            key = NULL,
            key_display_name = NULL,
            p = count / sum(count)
        ) |>
        dplyr::arrange(publication_year) |>
        dplyr::mutate(
            p_cum = cumsum(p)
        ) |>
        dplyr::select(
            publication_year,
            everything()
        )
    #
    message("CASE ...")
    oa_count$case_years <- openalexR::oa_fetch(
        title_and_abstract.search = compact(paste0("(", params$s_1_case, ")")),
        group_by = "publication_year",
        output = "dataframe",
        verbose = TRUE
    ) |>
        dplyr::mutate(
            publication_year = as.integer(as.character(key_display_name)),
            key = NULL,
            key_display_name = NULL,
            p = count / sum(count)
        ) |>
        dplyr::arrange(publication_year) |>
        dplyr::mutate(
            p_cum = cumsum(p)
        ) |>
        dplyr::select(
            publication_year,
            everything()
        )
    #
    message("TCA AND CASE ...")
    oa_count$tca_case_years <- openalexR::oa_fetch(
        title_and_abstract.search = compact(paste0("(", params$s_1_tca_corpus, ") AND (", params$s_1_case, ")")),
        group_by = "publication_year",
        output = "dataframe",
        verbose = TRUE
    ) |>
        dplyr::mutate(
            publication_year = as.integer(as.character(key_display_name)),
            key = NULL,
            key_display_name = NULL,
            p = count / sum(count)
        ) |>
        dplyr::arrange(publication_year) |>
        dplyr::mutate(
            p_cum = cumsum(p)
        ) |>
        dplyr::select(
            publication_year,
            everything()
        )
    saveRDS(oa_count, file = fn)
}

Download TCA Corpus

The corpus download will be stored in data/pages and the arrow database in data/corpus.

This is not on GitHub!

The corpus can be read by running corpus_read(), which opens the database so that it can then be fed into a dplyr pipeline. After most dplyr functions, the actual data needs to be collected via collect().

Only then is the actual data read!
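
A minimal sketch of this lazy-evaluation pattern (assuming the corpus has been downloaded and `params$corpus_dir` points to the arrow database):

```r
library(dplyr)

# Open the arrow database -- no data is read at this point.
corpus <- IPBES.R::corpus_read(params$corpus_dir)

# Filters and selections are pushed down to arrow; the data is only
# read when collect() is called at the end of the pipeline.
recent <- corpus |>
    dplyr::filter(publication_year >= 2020) |>
    dplyr::select(id, doi, display_name, publication_year) |>
    dplyr::collect()

nrow(recent)
```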

Needs to be enabled by setting eval: true in the code block below.

Show the code
#|

tic()

IPBES.R::corpus_download(
    pages_dir = params$pages_dir,
    title_and_abstract_search = compact(params$s_1_tca_corpus),
    continue = TRUE,
    delete_pages_dir = FALSE,
    set_size = 2000,
    dry_run = FALSE,
    verbose = TRUE,
    mc_cores = 6
)

toc()
Show the code
tic()

IPBES.R::corpus_pages_to_arrow(
    pages_dir = params$pages_dir,
    arrow_dir = params$corpus_dir,
    continue = TRUE,
    delete_arrow_dir = FALSE,
    dry_run = FALSE,
    verbose = TRUE,
    mc_cores = 2
)

toc()
Show the code
#|

years <- IPBES.R::corpus_read(params$corpus_dir) |>
    distinct(publication_year) |>
    collect() |>
    unlist() |>
    as.vector() |>
    sort()

lapply(
    years,
    function(y) {
        message("\nProcessing year: ", y)
        tic()
        dataset <- IPBES.R::corpus_read(params$corpus_dir) |>
            dplyr::filter(publication_year == y) |>
            dplyr::collect() |>
            group_by(id) |>
            slice_max(
                publication_year,
                n = 1,
                with_ties = FALSE,
                na_rm = TRUE
            )
        unlink(
            file.path(params$corpus_dir, paste0("publication_year=", y)),
            recursive = TRUE,
            force = TRUE
        )
        arrow::write_dataset(
            dataset = dataset,
            path = params$corpus_dir,
            partitioning = c("publication_year", "set"),
            format = "parquet",
            existing_data_behavior = "overwrite"
        )
        toc()
    }
)

Download TCA AND CASE Corpus

Show the code
#|

tic()

IPBES.R::corpus_download(
    pages_dir = params$pages_cases_dir,
    title_and_abstract_search = compact(paste0("(", params$s_1_tca_corpus, ") AND (", params$s_1_case, ")")),
    continue = TRUE,
    delete_pages_dir = FALSE,
    set_size = 2000,
    dry_run = TRUE,
    verbose = TRUE,
    mc_cores = 6
)

toc()
Show the code
tic()

IPBES.R::corpus_pages_to_arrow(
    pages_dir = params$pages_cases_dir,
    arrow_dir = params$corpus_cases_dir,
    continue = TRUE,
    delete_arrow_dir = FALSE,
    dry_run = FALSE,
    verbose = TRUE,
    mc_cores = 2
)

toc()
Show the code
#|

years <- IPBES.R::corpus_read(params$corpus_cases_dir) |>
    distinct(publication_year) |>
    collect() |>
    unlist() |>
    as.vector() |>
    sort()

lapply(
    years,
    function(y) {
        message("\nProcessing year: ", y)
        tic()
        dataset <- IPBES.R::corpus_read(params$corpus_cases_dir) |>
            dplyr::filter(publication_year == y) |>
            dplyr::collect() |>
            group_by(id) |>
            slice_max(
                publication_year,
                n = 1,
                with_ties = FALSE,
                na_rm = TRUE
            )
        unlink(
            file.path(params$corpus_cases_dir, paste0("publication_year=", y)),
            recursive = TRUE,
            force = TRUE
        )
        arrow::write_dataset(
            dataset = dataset,
            path = params$corpus_cases_dir,
            partitioning = c("publication_year", "set"),
            format = "parquet",
            existing_data_behavior = "overwrite"
        )
        toc()
    }
)

Prepare Full Text search of Title and Abstract

This is done using duckDB and the fts extension, which provides full text search for duckDB (see the duckDB documentation for details and for arrow / parquet support).

The following steps are conducted:

  1. Create a new duckDB database called tca_corpus.duckdb
    • import the data needed
    • create an fts index for full text search
Show the code
if (!file.exists(params$duckdb_fn)) {
    sql <- paste0(
        "CREATE TABLE tca_corpus AS SELECT id, author_abbr, publication_year, doi, display_name, ab FROM parquet_scan('",
        file.path(".", "data", "tca_corpus", "corpus", "**", "*.parquet"),
        "')"
    )

    con <- duckdb::dbConnect(duckdb::duckdb(), dbdir = params$duckdb_fn, read_only = FALSE)
    #
    dbExecute(con, "SET autoinstall_known_extensions=1")
    dbExecute(con, "SET autoload_known_extensions=1")
    dbExecute(con, sql)
    #
    duckdb::dbDisconnect(con, shutdown = TRUE)

    con <- duckdb::dbConnect(duckdb::duckdb(), dbdir = params$duckdb_fn, read_only = FALSE)
    #
    dbExecute(con, "INSTALL fts")
    dbExecute(con, "LOAD fts")

    input_table <- "tca_corpus"
    input_id <- "id"
    input_values <- "'display_name', 'ab'"

    sql <- paste0("PRAGMA create_fts_index(", input_table, ", ", input_id, ", ", input_values, ", overwrite=1);")

    dbExecute(con, sql)
    #
    duckdb::dbDisconnect(con, shutdown = TRUE)
}

# con <- dbConnect(duckdb::duckdb(params$duckdb_fn))

# SQL <- "SELECT * FROM tca_corpus WHERE display_name MATCH 'transformative';"
# dbListTables(con)

#     input_table <- "tca_corpus"
#     input_id <- "id"
#     input_values <- "'display_name', 'ab'"

#     query_string <- "'case study'"
#     fields <- "'display_name', 'ab'"

# sql <- paste0("SELECT fts_main_tca_corpus.match_bm25(", input_id, ", ", query_string, ", fields = ", fields, " FROM tca_corpus)"

# dbExecute(con, sql)

# duckdb::dbDisconnect(con, shutdown = TRUE)
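
The commented-out query above could be completed along the following lines (a sketch, assuming the index was created as above so that duckDB exposes it under the schema `fts_main_tca_corpus`):

```r
library(DBI)

con <- duckdb::dbConnect(duckdb::duckdb(), dbdir = params$duckdb_fn, read_only = TRUE)
dbExecute(con, "LOAD fts")

# BM25-ranked full text search over title (display_name) and abstract (ab).
sql <- "
    SELECT id, display_name,
           fts_main_tca_corpus.match_bm25(id, 'case study', fields := 'display_name,ab') AS score
    FROM tca_corpus
    WHERE score IS NOT NULL
    ORDER BY score DESC
    LIMIT 10
"
hits <- dbGetQuery(con, sql)

duckdb::dbDisconnect(con, shutdown = TRUE)
```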

Extract Data from Global Corpus

Export Random Works from TCA Cases Corpus

Show the code
#|

sample_size <- 250

fn <- file.path("data", "tca_corpus", paste0("random_", sample_size, "_tca_cases_corpus.xlsx"))
if (!file.exists(fn)) {
    set.seed(13)
    IPBES.R::corpus_read(params$corpus_cases_dir) |>
        dplyr::select(
            id,
            doi,
            author = author_abbr,
            title = display_name,
            abstract = ab
        ) |>
        dplyr::slice_sample(
            n = sample_size
        ) |>
        dplyr::mutate(
            abstract = substr(abstract, 1, 5000)
        ) |>
        dplyr::collect() |>
        writexl::write_xlsx(path = fn)
}

Sectors

The Sectors definition is based on the subfields assigned to each work by OpenAlex. These were grouped by experts into sectors. See this Google Doc for details.

Show the code
#|

if (!dir.exists(params$corpus_topics_dir)) {
    con <- duckdb::dbConnect(duckdb::duckdb(), read_only = FALSE)

    corpus_read(params$corpus_dir) |>
        arrow::to_duckdb(table_name = "corpus", con = con) |>
        invisible()
    corpus_read(file.path("input", "tca_corpus", "sectors_def.parquet")) |>
        arrow::to_duckdb(table_name = "sectors", con = con) |>
        invisible()

    paste0(
        "CREATE VIEW corpus_unnest AS ",
        "SELECT  ",
        "corpus.id AS work_id,  ",
        "corpus.publication_year AS publication_year,  ",
        "UNNEST(topics).i AS i,  ",
        "UNNEST(topics).score AS score,  ",
        "UNNEST(topics).name AS name, ",
        "UNNEST(topics).id AS id,  ",
        "UNNEST(topics).display_name AS display_name  ",
        "FROM  ",
        "corpus "
    ) |>
        dbExecute(conn = con)

    select_sql <- paste0(
        "SELECT ",
        "corpus_unnest.*, ",
        "sectors.sector ",
        "FROM ",
        "corpus_unnest ",
        "LEFT JOIN ",
        "sectors ",
        "ON ",
        "corpus_unnest.id = sectors.id "
    )

    dbGetQuery(con, paste(select_sql, "LIMIT 10"))

    sql <- paste0(
        "COPY ( ",
        select_sql,
        ") TO '", params$corpus_topics_dir, "' ",
        "(FORMAT PARQUET, COMPRESSION 'SNAPPY', PARTITION_BY 'publication_year')"
    )

    dbExecute(con, sql)

    duckdb::dbDisconnect(con, shutdown = TRUE)

}

Authors

Show the code
if (!dir.exists(params$corpus_authors_dir)) {
    con <- duckdb::dbConnect(duckdb::duckdb(), read_only = FALSE)

    corpus_read(params$corpus_dir) |>
        arrow::to_duckdb(table_name = "corpus", con = con) |>
        invisible()

    paste0(
        "CREATE VIEW corpus_unnest AS ",
        "SELECT  ",
        "corpus.id AS work_id,  ",
        "corpus.publication_year AS publication_year,  ",
        "UNNEST(author).au_id AS au_id,  ",
        "UNNEST(author).au_display_name AS au_display_name, ",
        "UNNEST(author).au_orcid AS au_orcid,  ",
        "UNNEST(author).author_position AS author_position,  ",
        "UNNEST(author).is_corresponding AS is_corresponding,  ",
        "UNNEST(author).au_affiliation_raw AS au_affiliation_raw,  ",
        "UNNEST(author).institution_id AS institution_id,  ",
        "UNNEST(author).institution_display_name AS institution_display_name,  ",
        "UNNEST(author).institution_ror AS institution_ror,  ",
        "UNNEST(author).institution_country_code AS institution_country_code,  ",
        "UNNEST(author).institution_type AS institution_type,  ",
        "UNNEST(author).institution_lineage AS institution_lineage  ",
        "FROM  ",
        "corpus "
    ) |> dbExecute(conn = con)

    paste0(
        "COPY ( ",
        "SELECT * FROM corpus_unnest ",
        ") TO '", params$corpus_authors_dir, "' ",
        "(FORMAT PARQUET, COMPRESSION 'SNAPPY', PARTITION_BY 'publication_year')"
    ) |>
        dbExecute(conn = con)

    duckdb::dbDisconnect(con, shutdown = TRUE)
}

Primary Topics

Show the code
fn <- file.path(".", "data", "tca_corpus", "prim_topics_tca_corpus.rds")
if (!file.exists(fn)) {
    prim_topics_tca_corpus <- corpus_read(params$corpus_topics_dir) |>
        dplyr::filter(
            name == "topic",
            i == 1
        ) |>
        dplyr::mutate(
            id = as.integer(sub("https://openalex.org/T", "", id))
        ) |>
        dplyr::group_by(id) |>
        dplyr::summarize(
            count = n()
        ) |>
        dplyr::left_join(
            read.csv(file.path("input", "tca_corpus", "OpenAlex_topic_mapping_table - final_topic_field_subfield_table.csv")),
            by = c("id" = "topic_id")
        ) |>
        dplyr::arrange(desc(count)) |>
        collect()

    saveRDS(prim_topics_tca_corpus, file = fn)
} else {
    prim_topics_tca_corpus <- readRDS(fn)
}

Figures

Show the code
fn <- file.path(".", "data", "tca_corpus", "publications_over_time_tca_corpus.rds")

if (!file.exists(fn)) {
    corpus_read(params$corpus_dir) |>
        dplyr::select(publication_year) |>
        dplyr::arrange(publication_year) |>
        dplyr::collect() |>
        table() |>
        as.data.frame() |>
        mutate(
            publication_year = as.integer(as.character(publication_year)),
            p = Freq / sum(Freq),
            p_cum = cumsum(p)
        ) |>
        rename(
            count = Freq
        ) |>
        dplyr::inner_join(
            y = openalexR::oa_fetch(
                entity = "works",
                search = "",
                group_by = "publication_year",
                output = "tibble",
                verbose = FALSE
            ) |>
                dplyr::select(
                    key,
                    count
                ) |>
                dplyr::rename(
                    publication_year = key,
                    count_oa = count
                ) |>
                dplyr::arrange(publication_year) |>
                dplyr::mutate(
                    publication_year = as.integer(as.character(publication_year)),
                    p_oa = count_oa / sum(count_oa),
                    p_oa_cum = cumsum(p_oa)
                )
        ) |>
        saveRDS(file = fn)
}
Show the code
if (length(list.files(file.path("figures", "tca_corpus"), pattern = "publications_over_time")) < 2) {
    figure <- readRDS(file.path(".", "data", "tca_corpus", "publications_over_time_tca_corpus.rds")) |>
        dplyr::filter(publication_year >= 1900) |>
        ggplot() +
        geom_bar(aes(x = publication_year, y = p), stat = "identity") +
        geom_line(aes(x = publication_year, y = p_cum / 10), color = "red") +
        geom_line(aes(x = publication_year, y = p_oa_cum / 10), color = "blue") +
        scale_x_continuous(breaks = seq(1900, 2020, 10)) +
        scale_y_continuous(
            "Proportion of publications",
            sec.axis = sec_axis(~ . * 10, name = "Cumulative proportion") # multiply by 10 to undo the / 10 scaling of the cumulative lines
        ) +
        labs(
            title = "Publications over time",
            x = "Year"
        ) +
        theme_minimal() +
        theme(axis.text.y.right = element_text(color = "red"))

    ggplot2::ggsave(
        file.path("figures", "tca_corpus", "publications_over_time.pdf"),
        width = 12,
        height = 6,
        figure
    )
    ggplot2::ggsave(
        file.path("figures", "tca_corpus", "publications_over_time.png"),
        width = 12,
        height = 6,
        figure
    )

    rm(figure)
}
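The dual-axis construction above deserves a note: ggplot2 has a single y scale, so the cumulative series (range 0 to 1) is divided by 10 to fit onto the proportion scale, and `sec_axis(~ . * 10)` applies the inverse transform so the right-hand axis reads in the original units. In isolation, with made-up data:

```r
library(ggplot2)

df <- data.frame(
    year = 2000:2009,
    p    = rep(0.1, 10) # yearly proportion of publications
)
df$p_cum <- cumsum(df$p) # cumulative proportion, range 0..1

ggplot(df) +
    geom_col(aes(year, p)) +
    # shrink the 0..1 series onto the 0..0.1 primary scale
    geom_line(aes(year, p_cum / 10), color = "red") +
    scale_y_continuous(
        "Proportion of publications",
        # inverse transform: the right axis shows p_cum in its original units
        sec.axis = sec_axis(~ . * 10, name = "Cumulative proportion")
    )
```

The same pattern works for any pair of series on different ranges, as long as the forward scaling of the data and the inverse transform in `sec_axis()` stay in sync.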

Maps

Show the code
fn <- file.path(".", "data", "tca_corpus", "countries_tca_corpus.rds")
if (!file.exists(fn)) {
    corpus <- corpus_read(params$corpus_authors_dir)

    data_first <- corpus |>
        dplyr::filter(
            author_position == "first"
        ) |>
        dplyr::select(
            work_id,
            institution_country_code,
        ) |>
        dplyr::group_by(
            work_id,
            institution_country_code
        ) |>
        dplyr::summarise(
            count_first = 1 / n(),
            .groups = "drop"
        ) |>
        dplyr::group_by(
            institution_country_code
        ) |>
        dplyr::summarise(
            count = sum(count_first),
            .groups = "drop"
        ) |>
        dplyr::mutate(
            position = "first"
        )

    data_all <- corpus |>
        dplyr::select(
            work_id,
        ) |>
        dplyr::group_by(
            work_id,
        ) |>
        dplyr::summarize(
            count = n()
        ) |>
        dplyr::right_join(
            y = corpus |>
                dplyr::select(
                    work_id,
                    institution_country_code
                ),
            by = "work_id"
        ) |>
        dplyr::group_by(
            institution_country_code
        ) |>
        dplyr::summarise(
            count = sum(count),
            .groups = "drop"
        ) |>
        dplyr::mutate(
            position = "all"
        )

    data_oa <- openalexR::oa_fetch(
        entity = "works",
        group_by = "authorships.countries",
        output = "tibble",
        verbose = FALSE
    ) |>
        dplyr::mutate(
            iso3c = countrycode::countrycode(
                key_display_name,
                origin = "country.name",
                destination = "iso3c"
            ),
            key_display_name = NULL,
            key = NULL,
            position = "oa"
        )

    dplyr::add_row(
        collect(data_first),
        collect(data_all)
    ) |>
        dplyr::mutate(
            iso3c = countrycode::countrycode(
                institution_country_code,
                origin = "iso2c",
                destination = "iso3c"
            ),
            institution_country_code = NULL
        ) |>
        dplyr::add_row(
            data_oa
        ) |>
        saveRDS(file = fn)
    rm(data_first, data_all, data_oa)
}

Some checks of the data

Show the code
if (length(list.files(path = file.path("maps", "tca_corpus"), pattern = "publications_countries")) < 2) {
    data <- readRDS(file.path(".", "data", "tca_corpus", "countries_tca_corpus.rds")) |>
        dplyr::group_by(iso3c) |>
        dplyr::summarize(
            count_first = sum(count[position == "first"], na.rm = TRUE),
            count_all = sum(count[position == "all"], na.rm = TRUE),
            count_oa = sum(count[position == "oa"], na.rm = TRUE)
        ) |>
        dplyr::mutate(
            count_oa = ifelse(is.na(count_oa), 0, count_oa),
            log_count_oa = log(count_oa + 1),
            p_oa = count_oa / sum(count_oa),
            #
            count_first = ifelse(is.na(count_first), 0, count_first),
            log_count_first = log(count_first + 1),
            p_first = count_first / sum(count_first),
            p_first_output = count_first / count_oa,
            p_first_diff = (p_oa - p_first) * 100,
            #
            count_all = ifelse(is.na(count_all), 0, count_all),
            log_count_all = log(count_all + 1),
            p_all = count_all / sum(count_all),
            p_all_output = count_all / count_oa,
            p_all_diff = (p_oa - p_all) * 100,
        )

    # data |> mutate(
    #     count_first = count_first / max(count_first),
    #     count_all = count_all / max(count_all),
    #     count_oa = count_oa / max(count_oa)
    # ) |>
    # dplyr::arrange(desc(count_oa)) |>
    # ggplot(aes(x = iso3c)) +
    #     geom_line(aes(y = count_first, color = "Count First"), group = 1) +
    #     geom_line(aes(y = count_all, color = "Count All"), group = 1) +
    #     geom_line(aes(y = count_oa, color = "Count OA"), group = 1) +
    #     scale_color_manual(values = c("Count First" = "red", "Count All" = "blue", "Count OA" = "green")) +
    #     labs(x = "ISO3C", y = "Normalized Count") +
    #     theme_minimal()

    map <- patchwork::wrap_plots(
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "count_oa",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("count of overall publications (count_oa)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "count_first",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("count of TCA publications (count_first)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "count_all",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("count of TCA publications (count_all)"),
        ####
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "log_count_oa",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("log(count + 1) of overall publications (log_count_oa)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "log_count_first",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("log(count + 1) of TCA publications (log_count_first)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "log_count_all",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("log(count + 1) of TCA publications (log_count_all)"),
        ####
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "p_oa",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("Overall research output (p_oa)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "p_first",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("TCA research output (p_first)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "p_all",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("TCA research output (p_all)"),
        ####
        ggplot() +
            theme_void(),
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "p_first_diff",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", mid = "white", high = "#56B4E9", midpoint = 0) +
            ggplot2::ggtitle("difference (overall - TCA) output (p_oa - p_first)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "p_all_diff",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", mid = "white", high = "#56B4E9", midpoint = 0) +
            ggplot2::ggtitle("difference (overall - TCA) output (p_oa - p_all)"),
        ncol = 3
    )

    ggplot2::ggsave(
        file.path("maps", "tca_corpus", "publications_countries.pdf"),
        width = 12,
        height = 8,
        map
    )
    ggplot2::ggsave(
        file.path("maps", "tca_corpus", "publications_countries.png"),
        width = 12,
        height = 8,
        map
    )
}
Show the code
if (length(list.files(path = file.path("maps", "tca_corpus"), pattern = "publications_countries_before_2016")) < 2) {
    data <- readRDS(file.path(".", "data", "tca_corpus", "countries_tca_corpus.rds")) |>
        dplyr::filter(
            publication_year < 2016
        ) |>
        dplyr::group_by(iso3c) |>
    dplyr::summarize(
        count_first = sum(count[position == "first"], na.rm = TRUE),
        count_all = sum(count[position == "all"], na.rm = TRUE),
        count_oa = sum(count[position == "oa"], na.rm = TRUE)
    ) |>
        dplyr::mutate(
            count_oa = ifelse(is.na(count_oa), 0, count_oa),
            log_count_oa = log(count_oa + 1),
            p_oa = count_oa / sum(count_oa),
            #
            count_first = ifelse(is.na(count_first), 0, count_first),
            log_count_first = log(count_first + 1),
            p_first = count_first / sum(count_first),
            p_first_output = count_first / count_oa,
            p_first_diff = (p_oa - p_first) * 100,
            #
            count_all = ifelse(is.na(count_all), 0, count_all),
            log_count_all = log(count_all + 1),
            p_all = count_all / sum(count_all),
            p_all_output = count_all / count_oa,
            p_all_diff = (p_oa - p_all) * 100,
        )

    map <- patchwork::wrap_plots(
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "count_oa",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("count of overall publications (count_oa)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "count_first",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("count of TCA publications (count_first)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "count_all",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("count of TCA publications (count_all)"),
        ####
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "log_count_oa",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("log(count + 1) of overall publications (log_count_oa)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "log_count_first",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("log(count + 1) of TCA publications (log_count_first)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "log_count_all",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("log(count + 1) of TCA publications (log_count_all)"),
        ####
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "p_oa",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("Overall research output (p_oa)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "p_first",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("TCA research output (p_first)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "p_all",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("TCA research output (p_all)"),
        ####
        ggplot() +
            theme_void(),
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "p_first_diff",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", mid = "white", high = "#56B4E9", midpoint = 0) +
            ggplot2::ggtitle("difference (overall - TCA) output (p_oa - p_first)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "p_all_diff",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", mid = "white", high = "#56B4E9", midpoint = 0) +
            ggplot2::ggtitle("difference (overall - TCA) output (p_oa - p_all)"),
        ncol = 3
    )

    ggplot2::ggsave(
        file.path("maps", "tca_corpus", "publications_countries_before_2016.pdf"),
        width = 12,
        height = 8,
        map
    )
    ggplot2::ggsave(
        file.path("maps", "tca_corpus", "publications_countries_before_2016.png"),
        width = 12,
        height = 8,
        map
    )
}
Show the code
if (length(list.files(path = file.path("maps", "tca_corpus"), pattern = "publications_countries_after_2019")) < 2) {
    data <- readRDS(file.path(".", "data", "tca_corpus", "countries_tca_corpus.rds")) |>
        dplyr::filter(
            publication_year > 2019
        ) |>
        dplyr::group_by(iso3c) |>
    dplyr::summarize(
        count_first = sum(count[position == "first"], na.rm = TRUE),
        count_all = sum(count[position == "all"], na.rm = TRUE),
        count_oa = sum(count[position == "oa"], na.rm = TRUE)
    ) |>
        dplyr::mutate(
            count_oa = ifelse(is.na(count_oa), 0, count_oa),
            log_count_oa = log(count_oa + 1),
            p_oa = count_oa / sum(count_oa),
            #
            count_first = ifelse(is.na(count_first), 0, count_first),
            log_count_first = log(count_first + 1),
            p_first = count_first / sum(count_first),
            p_first_output = count_first / count_oa,
            p_first_diff = (p_oa - p_first) * 100,
            #
            count_all = ifelse(is.na(count_all), 0, count_all),
            log_count_all = log(count_all + 1),
            p_all = count_all / sum(count_all),
            p_all_output = count_all / count_oa,
            p_all_diff = (p_oa - p_all) * 100,
        )

    map <- patchwork::wrap_plots(
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "count_oa",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("count of overall publications (count_oa)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "count_first",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("count of TCA publications (count_first)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "count_all",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("count of TCA publications (count_all)"),
        ####
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "log_count_oa",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("log(count + 1) of overall publications (log_count_oa)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "log_count_first",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("log(count + 1) of TCA publications (log_count_first)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "log_count_all",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("log(count + 1) of TCA publications (log_count_all)"),
        ####
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "p_oa",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("Overall research output (p_oa)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "p_first",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("TCA research output (p_first)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "p_all",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", high = "#56B4E9") +
            ggplot2::ggtitle("TCA research output (p_all)"),
        ####
        ggplot() +
            theme_void(),
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "p_first_diff",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", mid = "white", high = "#56B4E9", midpoint = 0) +
            ggplot2::ggtitle("difference (overall - TCA) output (p_oa - p_first)"),
        #
        data |>
            IPBES.R::map_country_codes(
                map_type = "countries",
                values = "p_all_diff",
                geodata_path = params$gdm_dir
            ) +
            ggplot2::scale_fill_gradient2(low = "#E69F00", mid = "white", high = "#56B4E9", midpoint = 0) +
            ggplot2::ggtitle("difference (overall - TCA) output (p_oa - p_all)"),
        ncol = 3
    )

    ggplot2::ggsave(
        file.path("maps", "tca_corpus", "publications_countries_after_2019.pdf"),
        width = 12,
        height = 8,
        map
    )
    ggplot2::ggsave(
        file.path("maps", "tca_corpus", "publications_countries_after_2019.png"),
        width = 12,
        height = 8,
        map
    )
}

Topics and Sectors

Show the code
fn <- file.path("data", "tca_corpus", "sectors_over_time.rds")
if (!file.exists(fn)) {
    data <- IPBES.R::corpus_read(params$corpus_topics_dir) |>
        dplyr::filter(
            name == "subfield"
        ) |>
        dplyr::group_by(
            publication_year,
            sector,
            i
        ) |>
        dplyr::summarize(
            count = n(),
            .groups = "drop"
        ) |>
        dplyr::rename(
            level = i
        ) |>
        dplyr::collect()

    data |>
        dplyr::filter(
            level == 1
        ) |>
        dplyr::group_by(
            publication_year,
            sector
        ) |>
        dplyr::summarize(
            count_1 = sum(count),
            .groups = "drop"
        ) |>
        dplyr::full_join(
            data |>
                dplyr::group_by(
                    publication_year,
                    sector
                ) |>
                dplyr::summarize(
                    count_all = sum(count)
                ),
            by = c("publication_year", "sector")
        ) |>
        dplyr::arrange(
            publication_year,
            sector
        ) |>
        dplyr::mutate(
            count_1 = ifelse(is.na(count_1), 0, count_1),
            count_all = ifelse(is.na(count_all), 0, count_all)
        ) |>
        dplyr::group_by(sector) |>
        dplyr::mutate(
            cumsum_count_1 = cumsum(count_1),
            cumsum_count_all = cumsum(count_all),
            p_cumsum_count_1 = cumsum_count_1 / max(cumsum_count_1),
            p_cumsum_count_all = cumsum_count_all / max(cumsum_count_all)
        ) |>
        saveRDS(fn)
    rm(data)
}
Show the code
if (length(list.files(file.path("figures", "tca_corpus"), pattern = "sectors_over_time")) < 2) {
    figure_1 <- readRDS(file.path(file.path("data", "tca_corpus", "sectors_over_time.rds"))) |>
        dplyr::filter(
            publication_year >= 1950
        ) |>
        ggplot() +
        geom_line(
            aes(
                x = publication_year,
                y = cumsum_count_1,
                color = sector,
                lty = sector
            )
        ) +
        scale_x_continuous(breaks = seq(1900, 2020, 10)) +
        scale_y_continuous(
            "Number of publications (log scale)",
            trans = "log10"
            # sec.axis = sec_axis(~ . * 10, name = "Cumulative proportion") # divide by 100 to scale back the secondary axis
        ) +
        labs(
            title = "Publications classified into Sectors over time (primary sector only)",
            x = "Year"
            # y = "Number of publications"
        ) +
        theme_minimal() +
        theme(
            legend.position = "bottom",
            # axis.text.y.right = element_text(color = "red")
        )

    figure_all <- readRDS(file.path(file.path("data", "tca_corpus", "sectors_over_time.rds"))) |>
        dplyr::filter(
            publication_year >= 1950
        ) |>
        ggplot() +
        geom_line(
            aes(
                x = publication_year,
                y = cumsum_count_all,
                color = sector,
                lty = sector
            )
        ) +
        scale_x_continuous(breaks = seq(1900, 2020, 10)) +
        scale_y_continuous(
            "Number of publications (log scale)",
            trans = "log10"
            # sec.axis = sec_axis(~ . * 10, name = "Cumulative proportion") # divide by 100 to scale back the secondary axis
        ) +
        labs(
            title = "Publications classified into Sectors over time (up to three sectors)",
            x = "Year"
            # y = "Number of publications"
        ) +
        theme_minimal() +
        theme(
            legend.position = "none",
            # axis.text.y.right = element_text(color = "red")
        )

    figure <- patchwork::wrap_plots(
        figure_1,
        figure_all,
        nrow = 2
    )

    ggplot2::ggsave(
        file.path("figures", "tca_corpus", "sectors_over_time.pdf"),
        width = 12,
        height = 12,
        figure
    )
    ggplot2::ggsave(
        file.path("figures", "tca_corpus", "sectors_over_time.png"),
        width = 12,
        height = 12,
        figure
    )

    rm(figure_1, figure_all, figure)
}
Show the code
if (length(list.files(file.path("figures", "tca_corpus"), pattern = "sectors_proportions_over_time")) < 2) {
    figure <- readRDS(file.path("data", "tca_corpus", "sectors_over_time.rds")) |>
        dplyr::filter(
            publication_year >= 1950
        ) |>
        group_by(publication_year) |>
        mutate(count_all = count_all / sum(count_all)) |>
        ggplot() +
        geom_col(
            aes(
                x = publication_year,
                y = count_all,
                fill = sector
            ),
            position = "stack"
        ) +
        scale_x_continuous(breaks = seq(1900, 2020, 10)) +
        scale_y_continuous(
            "Proportion of Publications" # ,
            #    limits = c(0, 1.0001)
        ) +
        labs(
            title = "Publications classified into Sectors over time. Each publication has up to three sectors assigned.",
            x = "Year",
            y = "Proportion"
        ) +
        theme_minimal() +
        theme(
            legend.position = "right"
        )

    ggplot2::ggsave(
        file.path("figures", "tca_corpus", "sectors_proportions_over_time.pdf"),
        width = 12,
        height = 12,
        figure
    )
    ggplot2::ggsave(
        file.path("figures", "tca_corpus", "sectors_proportions_over_time.png"),
        width = 12,
        height = 12,
        figure
    )

    rm(figure)
}

Results

Assessment of Search Terms Using OpenAlex

Number of Hits per Individual Corpus

Here we show the number of hits in the different individual corpora. The rows represent the different search terms as defined in Section 2.2.
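The per-term hit counts can be retrieved from the OpenAlex API without downloading any works. Below is a minimal sketch (not the exact script used for this report) that builds the request URL for one hypothetical search term; the `title_and_abstract.search` filter restricts the search to title and abstract, as done for all corpus searches here.

```r
# Count hits for one search term via the OpenAlex API (sketch).
term <- '"transformative change" OR "transformational change"' # hypothetical term

url <- paste0(
    "https://api.openalex.org/works",
    "?filter=title_and_abstract.search:", utils::URLencode(term, reserved = TRUE),
    "&per-page=1" # only `meta$count` is needed, not the works themselves
)

# count <- jsonlite::fromJSON(url)$meta$count  # requires network access
```

The `meta$count` field of the response corresponds to the numbers shown in the table below.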

Show the code
dat <- cbind(
    search_term_hits
)

rownames(dat) <- dplyr::recode(
    rownames(dat),
    "transformative change" = "Transformative Change @sec-transform",
    "nature environment" = "Nature @sec-nature",
    "tca corpus" = "Assessment Corpus @sec-tca-corpus",
    "Chapter 1 01" = "Ch1 01 @sec-ch1-01",
    "Chapter 1 02" = "Ch1 02 @sec-ch1-02",
    "Chapter 1 03" = "Ch1 03 @sec-ch1-03",
    "Chapter 1 04" = "Ch1 04 @sec-ch1-04",
    "Chapter 1 05" = "Ch1 05 @sec-ch1-05",
    "Chapter 1 06" = "Ch1 06 @sec-ch1-06",
    "Chapter 2" = "Ch2  @sec-ch2",
    "Chapter 3 01" = "Ch3 01 @sec-ch3-01",
    "Chapter 3 02" = "Ch3 02 @sec-ch3-02",
    "Chapter 3 03" = "Ch3 03 @sec-ch3-03",
    "Chapter 3 04" = "Ch3 04 @sec-ch3-04",
    "Chapter 3 05" = "Ch3 05 @sec-ch3-05",
    "Chapter 3 06" = "Ch3 06 @sec-ch3-06",
    "Chapter 4 01" = "Ch4 01 @sec-ch4-01",
    "Chapter 4 02" = "Ch4 02 @sec-ch4-02",
    "Chapter 5 vision" = "Ch5 Vision @sec-ch5_vision",
    "Chapter 5 vision case" = "Ch5 Vision Case @sec-ch5_vision_case",
    "case" = "Ch5 Case @sec-case"
)

dat |>
    knitr::kable(
        caption = "Number of hits",
    )
Number of hits
count db_response_time_ms
oa 251,854,856 85
Transformative Change Section 2.2.1 18,727,697 5184
Nature Section 2.2.2 24,786,104 3390
Assessment Corpus Section 2.2.3 4,654,669 5719
Ch1 01 Section 2.2.4.1 631,344 358
Ch1 02 Section 2.2.4.2 3,247,693 511
Ch1 03 Section 2.2.4.3 16,268,481 711
Ch1 04 Section 2.2.4.4 2,527,663 833
Ch1 05 Section 2.2.4.5 26,130,315 645
Ch1 06 Section 2.2.4.6 6,545,320 673
Ch2 Section 2.2.5 110,354,370 7039
Ch3 01 Section 2.2.6.1 16,078,556 846
Ch3 02 Section 2.2.6.2 33,775,470 1419
Ch3 03 Section 2.2.6.3 28,999,876 936
Ch3 04 Section 2.2.6.4 10,855,225 891
Ch3 05 Section 2.2.6.5 13,251,850 1226
Ch3 06 Section 2.2.6.6 20,963,305 980
Ch4 01 Section 2.2.7.1 889,107 1284
Ch4 02 Section 2.2.7.2 21 757
Ch5 Case Section 2.2.8.2 54,773,142 5033
Show the code
rm(dat)

Key Papers in Different Individual Corpora

Show the code
#|

tbl <- lapply(
    names(key_works_hits),
    function(n) {
        kwh <- key_works_hits[[n]]
        if (nrow(kwh) > 0) {
            total <- grepl("Total", rownames(kwh))
            rownames(kwh)[!total] <- paste0(n, " - <a href='https://doi.org/", rownames(kwh)[!total], "' target='_blank'>Click here</a>")
            rownames(kwh)[total] <- paste0("**", n, " - Total**")
            kwh |>
                arrange(Total) |>
                apply(
                    c(1, 2),
                    function(x) {
                        ifelse(x == 0, "<font color='red'>0</font>", paste0("<font color='green'>", x, "</font>"))
                    }
                ) |>
                as.data.frame()
        } else {
            return(NULL)
        }
    }
)
tbl <- tbl[sapply(tbl, class) != "NULL"]
tbl <- do.call(what = rbind, tbl)


detail <- rbind(
    "**overall**" = c(
        paste0(
            "**",
            search_term_hits |>
                select(count) |>
                unlist() |>
                as.vector(),
            "**"
        ),
        ""
    ),
    tbl
)

detail <- detail |>
    dplyr::rename(
        "Transformative Change @sec-transform" = s_1_transformative_change,
        "Nature @sec-nature" = s_1_nature_environment,
        "Assessment Corpus @sec-tca-corpus" = s_1_tca_corpus,
        "Ch1 01 @sec-ch1-01" = s_1_ch1_01,
        "Ch1 02 @sec-ch1-02" = s_1_ch1_02,
        "Ch1 03 @sec-ch1-03" = s_1_ch1_03,
        "Ch1 04 @sec-ch1-04" = s_1_ch1_04,
        "Ch1 05 @sec-ch1-05" = s_1_ch1_05,
        "Ch1 06 @sec-ch1-06" = s_1_ch1_06,
        "Ch2  @sec-ch2" = s_1_ch2,
        "Ch3 01 @sec-ch3-01" = s_1_ch3_01,
        "Ch3 02 @sec-ch3-02" = s_1_ch3_02,
        "Ch3 03 @sec-ch3-03" = s_1_ch3_03,
        "Ch3 04 @sec-ch3-04" = s_1_ch3_04,
        "Ch3 05 @sec-ch3-05" = s_1_ch3_05,
        "Ch3 06 @sec-ch3-06" = s_1_ch3_06,
        "Ch4 01 @sec-ch4-01" = s_1_ch4_01,
        "Ch4 02 @sec-ch4-02" = s_1_ch4_02,
        # "Ch5 Vision @sec-ch5_vision" = s_1_ch5_vision,
        "Ch5 Case @sec-case" = s_1_case,
        # "Ch5 Vision Case @sec-ch5_vision_case" = s_1_ch5_vision_case
    )

Key Papers in Individual Corpora

Summary

Each column corresponds to a search term, and each row to the set of key papers for a specific chapter, as provided by its authors. Each cell gives the number of those key papers found in the corresponding individual corpus.
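Whether a key paper falls into an individual corpus can also be checked directly against OpenAlex. A minimal sketch (hypothetical DOIs and term, not the exact script behind the tables): values of one filter are OR-ed with `|`, and separate filters are AND-ed with `,`, so `meta$count` directly gives the number of key papers matching the search term.

```r
# Check how many key papers (given as DOIs) fall into one corpus (sketch).
key_dois <- c("10.1000/exampleA", "10.1000/exampleB") # hypothetical DOIs
term <- "transformative" # hypothetical search term

url <- paste0(
    "https://api.openalex.org/works?filter=",
    "doi:", paste(key_dois, collapse = "|"), ",",
    "title_and_abstract.search:", utils::URLencode(term, reserved = TRUE)
)

# jsonlite::fromJSON(url)$meta$count  # number of key papers in the corpus
```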

Show the code
in_summary <- grepl("Total|overall", rownames(detail))
knitr::kable(
    detail[in_summary, ]
)
s_1_oa Transformative Change Section 2.2.1 Nature Section 2.2.2 Assessment Corpus Section 2.2.3 Ch1 01 Section 2.2.4.1 Ch1 02 Section 2.2.4.2 Ch1 03 Section 2.2.4.3 Ch1 04 Section 2.2.4.4 Ch1 05 Section 2.2.4.5 Ch1 06 Section 2.2.4.6 Ch2 Section 2.2.5 Ch3 01 Section 2.2.6.1 Ch3 02 Section 2.2.6.2 Ch3 03 Section 2.2.6.3 Ch3 04 Section 2.2.6.4 Ch3 05 Section 2.2.6.5 Ch3 06 Section 2.2.6.6 Ch4 01 Section 2.2.7.1 Ch4 02 Section 2.2.7.2 Ch5 Case Section 2.2.8.2 Total
overall 251,854,856 18,727,697 24,786,104 4,654,669 631,344 3,247,693 16,268,481 2,527,663 26,130,315 6,545,320 110,354,370 16,078,556 33,775,470 28,999,876 10,855,225 13,251,850 20,963,305 889,107 21 54,773,142
Ch_1 - Total 42 40 41 39 19 28 35 30 31 30 41 30 33 33 22 33 35 28 0 34 623
Ch_2 - Total 22 20 22 20 9 17 18 12 17 18 22 20 19 19 15 21 21 16 0 18 345
Ch_3_Cl_3 - Total 4 4 4 4 3 3 4 3 4 4 4 4 4 4 4 4 4 3 0 4 71
Ch_3_Cl_4 - Total 5 5 5 5 5 4 5 5 5 5 5 5 5 5 5 5 5 4 0 5 92
Ch_3_Cl_5 - Total 3 2 3 2 0 2 2 2 2 2 2 2 2 2 2 2 2 2 0 2 37
Ch_3_Cl_6 - Total 6 6 5 5 1 1 2 3 1 3 5 2 4 5 1 1 5 2 0 3 60
Ch_3 - Total 4 4 4 4 2 1 2 2 3 3 4 3 4 3 1 2 3 2 0 4 54
Ch_4_Cl_1 - Total 7 4 7 4 1 4 5 5 3 5 7 4 4 4 4 6 7 4 0 4 88
Ch_4_Cl_2 - Total 4 3 3 3 2 2 3 1 2 2 3 2 3 3 1 3 3 2 0 2 46
Ch_4_Cl_3 - Total 5 5 5 5 2 2 4 2 3 3 4 3 4 4 3 3 4 2 0 4 66
Ch_4_Cl_4 - Total 4 2 4 2 1 1 3 1 1 2 4 2 1 2 2 2 4 1 0 4 42
Ch_4_Cl_5 - Total 5 3 4 3 1 2 4 2 4 2 5 5 4 4 3 3 5 3 0 4 65
Ch_5 - Total 35 33 33 31 20 23 27 27 26 25 35 29 30 33 26 28 30 20 0 30 540
all - Total 134 120 128 116 60 82 105 87 93 95 130 102 106 110 82 104 118 81 0 108 1960

Detail

Show the code
knitr::kable(
    detail
)
s_1_oa Transformative Change Section 2.2.1 Nature Section 2.2.2 Assessment Corpus Section 2.2.3 Ch1 01 Section 2.2.4.1 Ch1 02 Section 2.2.4.2 Ch1 03 Section 2.2.4.3 Ch1 04 Section 2.2.4.4 Ch1 05 Section 2.2.4.5 Ch1 06 Section 2.2.4.6 Ch2 Section 2.2.5 Ch3 01 Section 2.2.6.1 Ch3 02 Section 2.2.6.2 Ch3 03 Section 2.2.6.3 Ch3 04 Section 2.2.6.4 Ch3 05 Section 2.2.6.5 Ch3 06 Section 2.2.6.6 Ch4 01 Section 2.2.7.1 Ch4 02 Section 2.2.7.2 Ch5 Case Section 2.2.8.2 Total
overall 251,854,856 18,727,697 24,786,104 4,654,669 631,344 3,247,693 16,268,481 2,527,663 26,130,315 6,545,320 110,354,370 16,078,556 33,775,470 28,999,876 10,855,225 13,251,850 20,963,305 889,107 21 54,773,142
Ch_1 - Click here 1 0 1 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 3
Ch_1 - Click here 1 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 3
Ch_1 - Click here 1 1 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 4
Ch_1 - Click here 1 1 0 0 0 0 1 0 1 0 1 0 0 0 0 0 0 0 0 1 5
Ch_1 - Click here 1 1 1 1 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 1 6
Ch_1 - Click here 1 1 1 1 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 1 7
Ch_1 - Click here 1 1 1 1 0 0 1 0 0 0 1 0 0 0 0 1 1 0 0 0 7
Ch_1 - Click here 1 1 1 1 0 0 0 0 1 0 1 1 1 0 0 0 0 0 0 1 8
Ch_1 - Click here 1 1 1 1 0 1 1 0 0 1 1 0 0 0 0 1 0 0 0 0 8
Ch_1 - Click here 1 1 1 1 0 0 0 0 0 1 1 0 1 1 0 0 1 0 0 1 9
Ch_1 - Click here 1 1 1 1 0 0 1 0 0 0 1 0 1 1 0 1 1 0 0 1 10
Ch_1 - Click here 1 1 1 1 0 1 1 0 0 1 1 0 0 1 0 1 1 1 0 0 11
Ch_1 - Click here 1 1 1 1 0 0 1 1 0 0 1 1 0 1 0 1 1 1 0 0 11
Ch_1 - Click here 1 1 1 1 0 1 1 1 0 0 1 1 1 1 1 1 1 0 0 1 14
Ch_1 - Click here 1 1 1 1 1 1 1 1 1 0 1 1 1 1 0 1 1 0 0 1 15
Ch_1 - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 0 1 1 1 0 0 15
Ch_1 - Click here 1 1 1 1 0 1 0 1 1 1 1 1 1 1 0 1 1 1 0 1 15
Ch_1 - Click here 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 0 1 1 0 1 15
Ch_1 - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 0 1 1 0 0 1 15
Ch_1 - Click here 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 16
Ch_1 - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 0 1 1 1 0 1 16
Ch_1 - Click here 1 1 1 1 0 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 16
Ch_1 - Click here 1 1 1 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1 0 1 16
Ch_1 - Click here 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 0 1 16
Ch_1 - Click here 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
Ch_1 - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
Ch_1 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 0 1 17
Ch_1 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_1 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_1 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_1 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_1 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_1 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_1 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_1 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_1 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_1 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_1 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_1 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_1 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_1 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_1 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_1 - Total 42 40 41 39 19 28 35 30 31 30 41 30 33 33 22 33 35 28 0 34 623
Ch_2 - Click here 1 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 4
Ch_2 - Click here 1 0 1 0 0 0 0 0 1 0 1 1 0 0 0 1 1 1 0 1 8
Ch_2 - Click here 1 1 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 0 0 0 9
Ch_2 - Click here 1 1 1 1 0 1 0 1 0 1 1 0 0 0 0 1 1 0 0 1 10
Ch_2 - Click here 1 1 1 1 0 1 0 0 0 1 1 1 1 1 1 1 1 0 0 0 12
Ch_2 - Click here 1 1 1 1 0 1 0 0 1 1 1 1 1 1 1 1 1 0 0 1 14
Ch_2 - Click here 1 1 1 1 0 0 1 0 1 1 1 1 1 1 1 1 1 1 0 1 15
Ch_2 - Click here 1 1 1 1 0 1 1 0 1 1 1 1 1 1 0 1 1 1 0 1 15
Ch_2 - Click here 1 1 1 1 0 1 1 0 1 0 1 1 1 1 1 1 1 1 0 1 15
Ch_2 - Click here 1 1 1 1 0 1 1 0 1 1 1 1 1 1 1 1 1 0 0 1 15
Ch_2 - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 0 1 1 1 0 0 15
Ch_2 - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
Ch_2 - Click here 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 0 1 17
Ch_2 - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
Ch_2 - Click here 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 0 1 17
Ch_2 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 0 1 17
Ch_2 - Click here 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
Ch_2 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_2 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_2 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_2 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_2 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_2 - Total 22 20 22 20 9 17 18 12 17 18 22 20 19 19 15 21 21 16 0 18 345
Ch_3_Cl_3 - Click here 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 0 0 1 16
Ch_3_Cl_3 - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
Ch_3_Cl_3 - Click here 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 0 1 17
Ch_3_Cl_3 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_3_Cl_3 - Total 4 4 4 4 3 3 4 3 4 4 4 4 4 4 4 4 4 3 0 4 71
Ch_3_Cl_4 - Click here 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 0 0 1 16
Ch_3_Cl_4 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_3_Cl_4 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_3_Cl_4 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_3_Cl_4 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_3_Cl_4 - Total 5 5 5 5 5 4 5 5 5 5 5 5 5 5 5 5 5 4 0 5 92
Ch_3_Cl_5 - Click here 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
Ch_3_Cl_5 - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
Ch_3_Cl_5 - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
Ch_3_Cl_5 - Total 3 2 3 2 0 2 2 2 2 2 2 2 2 2 2 2 2 2 0 2 37
Ch_3_Cl_6 - Click here 1 1 1 1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 5
Ch_3_Cl_6 - Click here 1 1 0 0 0 0 0 0 0 0 1 1 1 1 0 0 1 0 0 0 6
Ch_3_Cl_6 - Click here 1 1 1 1 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 7
Ch_3_Cl_6 - Click here 1 1 1 1 0 0 0 1 0 1 1 0 0 0 0 0 1 0 0 1 8
Ch_3_Cl_6 - Click here 1 1 1 1 0 0 1 0 0 1 1 0 1 1 0 0 1 1 0 1 11
Ch_3_Cl_6 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_3_Cl_6 - Total 6 6 5 5 1 1 2 3 1 3 5 2 4 5 1 1 5 2 0 3 60
Ch_3 - Click here 1 1 1 1 0 0 0 0 1 0 1 1 1 0 0 0 0 0 0 1 8
Ch_3 - Click here 1 1 1 1 0 0 0 0 0 1 1 0 1 1 0 0 1 0 0 1 9
Ch_3 - Click here 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
Ch_3 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 0 1 17
Ch_3 - Total 4 4 4 4 2 1 2 2 3 3 4 3 4 3 1 2 3 2 0 4 54
Ch_4_Cl_1 - Click here 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 4
Ch_4_Cl_1 - Click here 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0 1 1 0 0 0 4
Ch_4_Cl_1 - Click here 1 0 1 0 0 0 1 0 0 1 1 0 0 0 0 1 1 0 0 0 6
Ch_4_Cl_1 - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
Ch_4_Cl_1 - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
Ch_4_Cl_1 - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
Ch_4_Cl_1 - Click here 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 0 1 17
Ch_4_Cl_1 - Total 7 4 7 4 1 4 5 5 3 5 7 4 4 4 4 6 7 4 0 4 88
Ch_4_Cl_2 - Click here 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Ch_4_Cl_2 - Click here 1 1 1 1 0 0 1 0 0 0 1 0 1 1 0 1 1 0 0 0 9
Ch_4_Cl_2 - Click here 1 1 1 1 1 1 1 0 1 1 1 1 1 1 0 1 1 1 0 1 16
Ch_4_Cl_2 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_4_Cl_2 - Total 4 3 3 3 2 2 3 1 2 2 3 2 3 3 1 3 3 2 0 2 46
Ch_4_Cl_3 - Click here 1 1 1 1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 5
Ch_4_Cl_3 - Click here 1 1 1 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 7
Ch_4_Cl_3 - Click here 1 1 1 1 0 1 1 0 1 1 1 1 1 1 1 1 1 0 0 1 15
Ch_4_Cl_3 - Click here 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
Ch_4_Cl_3 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_4_Cl_3 - Total 5 5 5 5 2 2 4 2 3 3 4 3 4 4 3 3 4 2 0 4 66
Ch_4_Cl_4 - Click here 1 0 1 0 0 0 1 1 0 0 1 0 0 0 0 0 1 0 0 1 6
Ch_4_Cl_4 - Click here 1 0 1 0 0 0 1 0 0 0 1 0 0 1 1 1 1 0 0 1 8
Ch_4_Cl_4 - Click here 1 1 1 1 0 1 0 0 0 1 1 1 0 0 0 0 1 0 0 1 9
Ch_4_Cl_4 - Click here 1 1 1 1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 0 1 16
Ch_4_Cl_4 - Total 4 2 4 2 1 1 3 1 1 2 4 2 1 2 2 2 4 1 0 4 42
Ch_4_Cl_5 - Click here 1 0 1 0 0 0 0 0 0 0 1 1 1 1 0 0 1 0 0 0 6
Ch_4_Cl_5 - Click here 1 0 0 0 0 0 1 1 1 0 1 1 0 0 0 0 1 0 0 1 7
Ch_4_Cl_5 - Click here 1 1 1 1 0 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 16
Ch_4_Cl_5 - Click here 1 1 1 1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 0 1 16
Ch_4_Cl_5 - Click here 1 1 1 1 0 1 1 0 1 1 1 1 1 1 1 1 1 1 0 1 16
Ch_4_Cl_5 - Total 5 3 4 3 1 2 4 2 4 2 5 5 4 4 3 3 5 3 0 4 65
Ch_5 - Click here 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 2
Ch_5 - Click here 1 1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 3
Ch_5 - Click here 1 1 1 1 0 0 0 0 0 1 1 0 0 1 0 0 1 0 0 0 7
Ch_5 - Click here 1 1 1 1 0 0 1 0 0 0 1 1 1 1 0 0 0 0 0 0 8
Ch_5 - Click here 1 0 1 0 0 0 1 0 0 0 1 0 0 1 1 1 1 0 0 1 8
Ch_5 - Click here 1 1 0 0 0 1 1 0 1 1 1 0 0 1 0 0 0 0 0 1 8
Ch_5 - Click here 1 1 1 1 0 0 1 1 0 0 1 1 1 1 0 0 0 0 0 0 9
Ch_5 - Click here 1 1 1 1 0 0 0 1 0 0 1 0 1 1 1 1 1 0 0 1 11
Ch_5 - Click here 1 1 1 1 0 0 0 1 0 0 1 1 1 1 1 1 1 0 0 1 12
Ch_5 - Click here 1 1 1 1 0 0 0 1 0 0 1 1 1 1 1 1 1 0 0 1 12
Ch_5 - Click here 1 1 1 1 0 1 0 0 1 0 1 1 1 1 0 1 1 1 0 1 13
Ch_5 - Click here 1 1 1 1 0 1 0 1 1 0 1 1 1 1 1 1 1 0 0 1 14
Ch_5 - Click here 1 1 1 1 1 1 1 0 1 1 1 1 1 1 0 0 1 0 0 1 14
Ch_5 - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 16
Ch_5 - Click here 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 0 0 1 16
Ch_5 - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
Ch_5 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 0 1 17
Ch_5 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 17
Ch_5 - Click here 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 0 1 17
Ch_5 - Click here 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
Ch_5 - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
Ch_5 - Click here 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
Ch_5 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_5 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_5 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_5 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_5 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_5 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_5 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_5 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_5 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_5 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_5 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_5 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_5 - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
Ch_5 - Total 35 33 33 31 20 23 27 27 26 25 35 29 30 33 26 28 30 20 0 30 540
all - Click here 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
all - Click here 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
all - Click here 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 2
all - Click here 1 0 1 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 3
all - Click here 1 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 3
all - Click here 1 1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 3
all - Click here 1 1 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 4
all - Click here 1 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 4
all - Click here 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 4
all - Click here 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0 1 1 0 0 0 4
all - Click here 1 1 0 0 0 0 1 0 1 0 1 0 0 0 0 0 0 0 0 1 5
all - Click here 1 1 1 1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 5
all - Click here 1 1 1 1 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 1 6
all - Click here 1 1 0 0 0 0 0 0 0 0 1 1 1 1 0 0 1 0 0 0 6
all - Click here 1 0 1 0 0 0 1 0 0 1 1 0 0 0 0 1 1 0 0 0 6
all - Click here 1 0 1 0 0 0 1 1 0 0 1 0 0 0 0 0 1 0 0 1 6
all - Click here 1 0 1 0 0 0 0 0 0 0 1 1 1 1 0 0 1 0 0 0 6
all - Click here 1 1 1 1 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 1 7
all - Click here 1 1 1 1 0 0 1 0 0 0 1 0 0 0 0 1 1 0 0 0 7
all - Click here 1 1 1 1 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 7
all - Click here 1 1 1 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 7
all - Click here 1 0 0 0 0 0 1 1 1 0 1 1 0 0 0 0 1 0 0 1 7
all - Click here 1 1 1 1 0 0 0 0 0 1 1 0 0 1 0 0 1 0 0 0 7
all - Click here 1 1 1 1 0 0 0 0 1 0 1 1 1 0 0 0 0 0 0 1 8
all - Click here 1 1 1 1 0 1 1 0 0 1 1 0 0 0 0 1 0 0 0 0 8
all - Click here 1 0 1 0 0 0 0 0 1 0 1 1 0 0 0 1 1 1 0 1 8
all - Click here 1 1 1 1 0 0 0 1 0 1 1 0 0 0 0 0 1 0 0 1 8
all - Click here 1 0 1 0 0 0 1 0 0 0 1 0 0 1 1 1 1 0 0 1 8
all - Click here 1 1 1 1 0 0 1 0 0 0 1 1 1 1 0 0 0 0 0 0 8
all - Click here 1 1 0 0 0 1 1 0 1 1 1 0 0 1 0 0 0 0 0 1 8
all - Click here 1 1 1 1 0 0 0 0 0 1 1 0 1 1 0 0 1 0 0 1 9
all - Click here 1 1 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 0 0 0 9
all - Click here 1 1 1 1 0 0 1 0 0 0 1 0 1 1 0 1 1 0 0 0 9
all - Click here 1 1 1 1 0 1 0 0 0 1 1 1 0 0 0 0 1 0 0 1 9
all - Click here 1 1 1 1 0 0 1 1 0 0 1 1 1 1 0 0 0 0 0 0 9
all - Click here 1 1 1 1 0 0 1 0 0 0 1 0 1 1 0 1 1 0 0 1 10
all - Click here 1 1 1 1 0 1 0 1 0 1 1 0 0 0 0 1 1 0 0 1 10
all - Click here 1 1 1 1 0 1 1 0 0 1 1 0 0 1 0 1 1 1 0 0 11
all - Click here 1 1 1 1 0 0 1 1 0 0 1 1 0 1 0 1 1 1 0 0 11
all - Click here 1 1 1 1 0 0 1 0 0 1 1 0 1 1 0 0 1 1 0 1 11
all - Click here 1 1 1 1 0 0 0 1 0 0 1 0 1 1 1 1 1 0 0 1 11
all - Click here 1 1 1 1 0 1 0 0 0 1 1 1 1 1 1 1 1 0 0 0 12
all - Click here 1 1 1 1 0 0 0 1 0 0 1 1 1 1 1 1 1 0 0 1 12
all - Click here 1 1 1 1 0 0 0 1 0 0 1 1 1 1 1 1 1 0 0 1 12
all - Click here 1 1 1 1 0 1 0 0 1 0 1 1 1 1 0 1 1 1 0 1 13
all - Click here 1 1 1 1 0 1 1 1 0 0 1 1 1 1 1 1 1 0 0 1 14
all - Click here 1 1 1 1 0 1 0 0 1 1 1 1 1 1 1 1 1 0 0 1 14
all - Click here 1 1 1 1 0 1 0 1 1 0 1 1 1 1 1 1 1 0 0 1 14
all - Click here 1 1 1 1 1 1 1 0 1 1 1 1 1 1 0 0 1 0 0 1 14
all - Click here 1 1 1 1 1 1 1 1 1 0 1 1 1 1 0 1 1 0 0 1 15
all - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 0 1 1 1 0 0 15
all - Click here 1 1 1 1 0 1 0 1 1 1 1 1 1 1 0 1 1 1 0 1 15
all - Click here 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 0 1 1 0 1 15
all - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 0 1 1 0 0 1 15
all - Click here 1 1 1 1 0 0 1 0 1 1 1 1 1 1 1 1 1 1 0 1 15
all - Click here 1 1 1 1 0 1 1 0 1 1 1 1 1 1 0 1 1 1 0 1 15
all - Click here 1 1 1 1 0 1 1 0 1 0 1 1 1 1 1 1 1 1 0 1 15
all - Click here 1 1 1 1 0 1 1 0 1 1 1 1 1 1 1 1 1 0 0 1 15
all - Click here 1 1 1 1 0 1 1 0 1 1 1 1 1 1 1 1 1 0 0 1 15
all - Click here 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 16
all - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 0 1 1 1 0 1 16
all - Click here 1 1 1 1 0 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 16
all - Click here 1 1 1 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1 0 1 16
all - Click here 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 0 1 16
all - Click here 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 0 0 1 16
all - Click here 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 0 0 1 16
all - Click here 1 1 1 1 1 1 1 0 1 1 1 1 1 1 0 1 1 1 0 1 16
all - Click here 1 1 1 1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 0 1 16
all - Click here 1 1 1 1 0 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 16
all - Click here 1 1 1 1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 0 1 16
all - Click here 1 1 1 1 0 1 1 0 1 1 1 1 1 1 1 1 1 1 0 1 16
all - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 16
all - Click here 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 0 0 1 16
all - Click here 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 0 1 17
all - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 0 1 17
all - Click here 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 0 1 17
all - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 17
all - Click here 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 17
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Click here 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 18
all - Total 134 120 128 116 60 82 105 87 93 95 130 102 106 110 82 104 118 81 0 108 1960

TCA Corpus properties

Publications over time

The red line is the cumulative proportion of publications in the corpus; the blue line is the cumulative proportion of the whole OpenAlex corpus. Both use the secondary (red) axis.
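The two curves are cumulative proportions of yearly publication counts. A toy sketch with made-up numbers (the report reads the real counts from `publications_over_time_tca_corpus.rds`):

```r
# Cumulative proportion of publications over time (toy sketch).
years <- 2000:2004
count_corpus <- c(10, 20, 40, 80, 160) # hypothetical corpus counts per year
count_oa <- c(1000, 1500, 2200, 3000, 4300) # hypothetical OpenAlex counts

cum_prop_corpus <- cumsum(count_corpus) / sum(count_corpus)
cum_prop_oa <- cumsum(count_oa) / sum(count_oa)

plot(years, cum_prop_corpus,
    type = "l", col = "red",
    xlab = "Year", ylab = "Cumulative proportion"
)
lines(years, cum_prop_oa, col = "blue")
```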

To download high resolution, click here

Show the code
readRDS(file.path(".", "data", "tca_corpus", "publications_over_time_tca_corpus.rds")) |>
    IPBES.R::table_dt(fn = "publications_over_time")

Countries in TCA Corpus

The countries are based on the countries of the institutes of all authors, with each paper weighted by `1 / no_authors_per_paper`.

The following calculations were done:

  • **count** = `ifelse(is.na(count), 0, count)`
  • **log_count** = `log(count + 1)`
  • **p** = `count / sum(count)`
  • **count_oa** = `ifelse(is.na(count_oa), 0, count_oa)`
  • **log_count_oa** = `log(count_oa + 1)`
  • **p_oa** = `count_oa / sum(count_oa)`
  • **p_diff** = `(p_oa - p) * 100`
  • **p_ratio** = `count / count_oa`
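A minimal base-R sketch of these calculations on a toy country table (hypothetical numbers; the report applies the same steps to the real per-country counts):

```r
# Toy country table: `count` = weighted publications in the TCA corpus,
# `count_oa` = publications in all of OpenAlex (hypothetical numbers).
countries <- data.frame(
    country = c("A", "B", "C"),
    count = c(10, 30, NA),
    count_oa = c(100, 200, 50)
)

countries <- within(countries, {
    count <- ifelse(is.na(count), 0, count) # missing counts become 0
    log_count <- log(count + 1)
    p <- count / sum(count)
    count_oa <- ifelse(is.na(count_oa), 0, count_oa)
    log_count_oa <- log(count_oa + 1)
    p_oa <- count_oa / sum(count_oa)
    p_diff <- (p_oa - p) * 100 # difference in percentage points
    p_ratio <- count / count_oa # over-/under-representation in the corpus
})
```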
Show the code
readRDS(file.path(".", "data", "tca_corpus", "countries_tca_corpus.rds")) |>
    IPBES.R::table_dt(fn = "publications_per_country")

All Years

To download high resolution, click here

Sectors over time

For clarity, the log of the cumulative sum of publications per sector over time is shown here.

The top graph shows only the primary sector assigned; the bottom graph shows all assigned sectors (primary, secondary and tertiary).

To download high resolution, click here

The following graph shows the proportion of the different sectors over time.

To download high resolution, click here

Show the code
readRDS(file.path(".", "data", "tca_corpus", "sectors_over_time.rds")) |>
    IPBES.R::table_dt(
        fn = "sectors_over_time",
        fixedColumns = list(leftColumns = 3)
    )

Topics in corpus

Show the code
#|

cs <- cumsum(prim_topics_tca_corpus$count)
cs |>
    plot(
        type = "l",
        xlab = "Topic",
        ylab = "Cumulative Count",
        main = "Cumulative Topics in TCA Corpus"
    )

abline(
    h = 0.95 * cs[length(cs)],
    v = min(which(cs > 0.95 * cs[length(cs)])),
    col = "red"
)

text(
    x = 0.5 * length(cs),
    y = 0.95 * cs[length(cs)],
    pos = 3,
    labels = "95% of the corpus",
    col = "red"
)

Show the code
#|

prim_topics_tca_corpus |>
    relocate(count, .after = "id") |>
    IPBES.R::table_dt(
        fn = "topics_tca_corpus",
    )
Warning in instance$preRenderHook(instance): It seems your data is too big for
client-side DataTables. You may consider server-side processing:
https://rstudio.github.io/DT/server.html

SubFields in Corpus

Show the code
#|
cs <- prim_topics_tca_corpus |>
    mutate(
        id = NULL,
        topic_name = NULL,
        keywords = NULL,
        summary = NULL,
        wikipedia_url = NULL
    ) |>
    group_by(
        subfield_id,
    ) |>
    summarise(
        count = sum(count)
    ) |>
    arrange(desc(count)) |>
    dplyr::select(count) |>
    unlist() |>
    cumsum()
cs |>
    plot(
        type = "l",
        xlab = "Subfield",
        ylab = "Cumulative Count",
        main = "Cumulative Subfields in TCA Corpus"
    )

abline(
    h = 0.95 * cs[length(cs)],
    v = min(which(cs > 0.95 * cs[length(cs)])),
    col = "red"
)

text(
    x = 0.5 * length(cs),
    y = 0.95 * cs[length(cs)],
    pos = 3,
    labels = "95% of the corpus",
    col = "red"
)

Show the code
#|

fn <- "subfields_tca_corpus" # filename stem for the table export buttons below

prim_topics_tca_corpus |>
    mutate(
        topic_id = NULL,
        topic_name = NULL,
        keywords = NULL,
        summary = NULL,
        wikipedia_url = NULL
    ) |>
    group_by(
        subfield_id,
        subfield_name,
        field_id,
        field_name,
        domain_id,
        domain_name
    ) |>
    summarise(
        count = sum(count),
        .groups = "drop"
    ) |>
    arrange(desc(count)) |>
    relocate(count, .after = "subfield_id") |>
    DT::datatable(
        extensions = c(
            "Buttons",
            "FixedColumns",
            "Scroller"
        ),
        options = list(
            dom = "Bfrtip",
            buttons = list(
                list(
                    extend = "csv",
                    filename = fn
                ),
                list(
                    extend = "excel",
                    filename = fn
                ),
                list(
                    extend = "pdf",
                    filename = fn,
                    orientation = "landscape",
                    customize = DT::JS(
                        "function(doc) {",
                        "  doc.defaultStyle.fontSize = 5;", # Change the font size
                        "}"
                    )
                ),
                "print"
            ),
            scroller = TRUE,
            scrollY = DT::JS("window.innerHeight * 0.7 + 'px'"),
            scrollX = TRUE,
            fixedColumns = list(leftColumns = 4)
        ),
        escape = FALSE
    )

==== APPENDIX ====

Assessment of Sectors / Concepts Search Terms

The file strategies_options.txt contains the terms for the different strategies and options.

I will now iterate through all of them and determine the number of hits per individual search term. This is useful in itself for interpreting the importance of each term, and it also helps shorten the combined search term so that it can be used together with the TCA search term.
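The preparation code below assumes a file structure along these lines (hypothetical excerpt; the real terms are in `strategies_options.txt`):

```r
# Hypothetical excerpt of strategies_options.txt:
#   "# "      main header (dropped),
#   "## (..)" a strategy (first level),
#   "###"     an option (second level),
#   remaining lines are the search terms themselves.
sts <- c(
    "# Strategies and options",
    "## (Strategy 1)",
    "### Option 1a",
    "\"term one\" OR \"term two\"",
    "### Option 1b",
    "\"term three\""
)

# First-level split points, as used in the preparation code:
split_points_level1 <- cumsum(grepl("^## \\(", sts))
```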

Methods

Prepare Search Terms

Show the code
fn <- file.path("data", "tca_corpus", "strategies_options_terms.rds")
if (!file.exists(fn)) {
    # Read the lines from the file
    sts <- readLines("input/tca_corpus/search terms/strategies_options.txt")

    # Remove empty or NA lines or Main header ("# ")
    sts <- sts[!is.na(sts) & nchar(sts) > 0]
    sts <- grep("^# ", sts, invert = TRUE, value = TRUE)

    # Create a vector that indicates where each new first-level element should start
    split_points_level1 <- cumsum(grepl("^## \\(", sts))

    # Split the vector into a list
    list_sts_level1 <- split(sts, split_points_level1)

    # For each first-level element, split into second-level elements
    list_sts_level2 <- lapply(list_sts_level1, function(x) {
        split_points_level2 <- cumsum(grepl("^###", x))
        split(x, split_points_level2)
    })

    # Remove the lines with "## (" and "###" from each element of the list
    strategies_options_terms <- lapply(
        list_sts_level2,
        function(x) {
            lapply(
                x,
                function(y) {
                    res <- y[!grepl("^## \\(|^###", y)]
                    if (length(res) == 0) {
                        res <- NULL
                    }
                    return(res)
                }
            )
        }
    )

    # Remove empty elements from the second-level lists
    strategies_options_terms <- lapply(
        strategies_options_terms,
        function(sts) {
            i <- sapply(
                sts,
                length
            ) > 0
            return(sts[i])
        }
    )

    # Extract the names for the first-level list
    names_level1 <- gsub("^## \\(|\\)$", "", sts[grepl("^## \\(", sts)])

    # Extract the names for the second-level lists
    names_level2 <- lapply(list_sts_level1, function(x) gsub("^### |\\)$", "", x[grepl("^###", x)]))
    names(names_level2) <- names_level1

    # Assign the names to the first and second level list
    names(strategies_options_terms) <- names_level1
    strategies_options_terms <- Map(setNames, strategies_options_terms, names_level2)

    saveRDS(strategies_options_terms, file = fn)
} else {
    strategies_options_terms <- readRDS(fn)
}

First run of the search terms

Show the code
fn <- file.path("data", "tca_corpus", "strategies_options.rds")
if (!file.exists(fn)) {
    strategies_options_terms <- readRDS(file.path("data", "tca_corpus", "strategies_options_terms.rds"))
    strategies_options <- lapply(
        names(strategies_options_terms),
        function(sec) {
            message("- ", sec)
            lapply(
                names(strategies_options_terms[[sec]]),
                function(conc) {
                    message("    |- ", conc)
                    result <- list()
                    result$term <- paste(strategies_options_terms[[sec]][[conc]], collapse = " ")
                    result$count <- NA
                    result$shortened <- TRUE
                    result$rel_excluded <- NA
                    result$years <- NA
                    result$assess_search_terms <- NA
                    try(
                        {
                            result$count <- openalexR::oa_fetch(
                                title_and_abstract.search = IPBES.R::compact(paste0("(", params$s_1_tca_corpus, ") AND (", result$term, ")")),
                                count_only = TRUE,
                                output = "list",
                                verbose = FALSE
                            )$count
                            result$shortened <- FALSE
                            result$assess_search_terms <- assess_search_term(
                                st = strategies_options_terms[[sec]][[conc]],
                                remove = " OR$",
                                excl_others = FALSE,
                                mc.cores = params$mc.cores
                            ) |>
                                dplyr::arrange(desc(count))
                            result$years <- openalexR::oa_fetch(
                                title_and_abstract.search = IPBES.R::compact(paste0("(", params$s_1_tca_corpus, ") AND (", result$term, ")")),
                                group_by = "publication_year",
                                output = "dataframe",
                                verbose = FALSE
                            ) |>
                                dplyr::select(
                                    publication_year = key_display_name,
                                    count
                                ) |>
                                dplyr::arrange(
                                    publication_year
                                )
                        },
                        silent = TRUE
                    )
                    if (result$shortened) {
                        x <- reduce_search_term_length(
                            search_term = result$term,
                            AND_term = params$s_1_tca_corpus,
                            verbose = FALSE
                        )
                        result$term <- x$search_term
                        result$count <- x$final_count
                        result$shortened <- TRUE
                        result$rel_excluded <- x$rel_excluded
                        result$years <- openalexR::oa_fetch(
                            title_and_abstract.search = IPBES.R::compact(paste0("(", params$s_1_tca_corpus, ") AND (", result$term, ")")),
                            group_by = "publication_year",
                            output = "dataframe",
                            verbose = FALSE,
                            progress = TRUE
                        )
                        result$assess_search_terms <- x$assessment
                    }
                    return(result)
                }
            )
        }
    )

    # Assign the names to the first and second level list
    names(strategies_options) <- names(strategies_options_terms)
    for (sec in names(strategies_options)) {
        names(strategies_options[[sec]]) <- names(strategies_options_terms[[sec]])
    }

    saveRDS(strategies_options, file = fn)
} else {
    strategies_options <- readRDS(fn)
}

Results

Count of Concepts

Show the code

strategies_options <- readRDS(file.path("data", "tca_corpus", "strategies_options.rds"))

data <- lapply(
    names(strategies_options),
    function(str) {
        lapply(
            names(strategies_options[[str]]),
            function(con) {
                return(
                    data.frame(
                        Strategy = str,
                        Concept = con,
                        Count = strategies_options[[str]][[con]]$count,
                        Count_until_1992 = sum(strategies_options[[str]][[con]]$years$count[strategies_options[[str]][[con]]$years$publication_year <= 1992]),
                        Count_after_1992 = sum(strategies_options[[str]][[con]]$years$count[strategies_options[[str]][[con]]$years$publication_year > 1992])
                    )
                )
            }
        ) |>
            do.call(what = rbind)
    }
) |>
    do.call(what = rbind)

data |>
    IPBES.R::table_dt(
        fn = "strategies_options_counts",
        fixedColumns = list(leftColumns = 2)
    )

Plot of the Count of the Concepts split at 1992

These data are corrected for the different overall research output before and after 1992 by dividing by the total research output in each period as reflected in OpenAlex.
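The correction amounts to dividing each period's concept count by the total OpenAlex output of that period. A minimal sketch with made-up numbers:

```r
# Illustrative numbers only - not actual OpenAlex totals.
count_until_1992 <- 120    # concept hits up to and including 1992
count_after_1992 <- 480    # concept hits after 1992
oa_until_1992 <- 2.5e7     # total OpenAlex works up to 1992
oa_after_1992 <- 1.75e8    # total OpenAlex works after 1992

# Corrected (relative) counts, comparable across the two periods
corrected_until <- count_until_1992 / oa_until_1992
corrected_after <- count_after_1992 / oa_after_1992
```

A concept whose corrected count is larger after 1992 than before has grown faster than the literature as a whole.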

Show the code

oa <- openalexR::oa_fetch(
    search = "",
    group_by = "publication_year",
    output = "dataframe",
    verbose = FALSE
)
Show the code
oa_until_1992 <- sum(oa$count[oa$key <= 1992])
oa_after_1992 <- sum(oa$count[oa$key > 1992])

data <- data |>
    dplyr::mutate(
        Strategy = paste0(Strategy, " |||| ", Concept),
        Count_until_1992 = Count_until_1992 / oa_until_1992,
        Count_after_1992 = Count_after_1992 / oa_after_1992,
    ) |>
    dplyr::group_by(Strategy) |>
    dplyr::mutate(
        Count_until_1992_p = Count_until_1992 / sum(Count_until_1992 + Count_after_1992),
        Count_after_1992_p = Count_after_1992 / sum(Count_until_1992 + Count_after_1992)
    )


figure <- data |>
    tidyr::pivot_longer(
        cols = c(Count_until_1992, Count_after_1992),
        names_to = "Period",
        values_to = "Count_year"
    ) |>
    # Create the plot
    ggplot(aes(x = Strategy, y = Count_year, fill = Period)) +
    geom_bar(stat = "identity") +
    theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1)) +
    labs(x = "Strategy", y = "Count", fill = "Period")


ggplot2::ggsave(
    file.path("figures", "tca_corpus", "strategies_options_time_split.pdf"),
    width = 12,
    height = 18,
    figure
)
ggplot2::ggsave(
    file.path("figures", "tca_corpus", "strategies_options_time_split.png"),
    width = 12,
    height = 18,
    figure
)

figure <- data |>
    tidyr::pivot_longer(
        cols = c(Count_until_1992_p, Count_after_1992_p),
        names_to = "Period",
        values_to = "Count_p_year"
    ) |>
    # Create the plot
    ggplot(aes(x = Strategy, y = Count_p_year, fill = Period)) +
    geom_bar(stat = "identity") +
    theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1)) +
    labs(x = "Strategy", y = "Count", fill = "Period")


ggplot2::ggsave(
    file.path("figures", "tca_corpus", "strategies_options_time_split_p.pdf"),
    width = 12,
    height = 18,
    figure
)
ggplot2::ggsave(
    file.path("figures", "tca_corpus", "strategies_options_time_split_p.png"),
    width = 12,
    height = 18,
    figure
)

The graph shows, for each Strategy |||| Concept, the publication counts before and after 1992, corrected for overall research output.

To download high resolution, click here

The second graph shows, for each Strategy |||| Concept, the proportions of the output-corrected publications before and after 1992.

To download high resolution, click here

Assessment of individual terms

These numbers are the hit counts for the TCA Corpus AND each individual term of the Concept.
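Each Concept's combined search term is a chain of individual terms joined by OR; the assessment splits it back into those terms and counts each one separately against the TCA corpus. A minimal sketch of the splitting step (the combined string here is hypothetical, and the actual splitting inside `assess_search_term()` from IPBES.R may differ):

```r
# Sketch: recover the individual terms from a combined OR search term.
combined <- "\"Protected area\" OR \"Marine reserve\" OR Rewilding"
terms <- strsplit(combined, " OR ", fixed = TRUE)[[1]]
# Each element of `terms` can then be counted on its own, e.g. with
# openalexR::oa_fetch(..., count_only = TRUE), as in the search above.
```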

Show the code
strategies_options <- readRDS(file.path("data", "tca_corpus", "strategies_options.rds"))

lapply(
    names(strategies_options),
    function(str) {
        # cat("\n\n### ", str, "\n")
        lapply(
            names(strategies_options[[str]]),
            function(con) {
                # cat("\n#### ", con, "\n")
                strategies_options[[str]][[con]]$assess_search_terms |>
                    knitr::kable(
                        caption = paste0(str, " -- ", con)
                    )
            }
        )
    }
)

[[1]] [[1]][[1]]

1) Conserve restore and regenerate places of value to nature and people. – Conservation and restoration options
term count
Restoration 486789
MPA 340712
“Protected area” 60332
“Marine protected area” 11804
“Remedial action” 10707
“Forest conservation” 8081
“Co-management” 6378
“Marine reserve” 3962
“Marine park” 3140
“sacred site” 2473
ICCA 2454
Rewilding 1768
“Community-based management” 1622
“sacred grove” 1419
“Ecosystem-based approach” 1190
OECM 278
“Community protocol” 268
“Biocultural conservation” 189
“Other Effective area-based Conservation Measure” 158
“Transboundary protected area” 152
“Land sovereignty” 92
“Community-led conservation” 41
“Convivial conservation” 39
“Indigenous and Community Conserved Area” 38
“High sea conservation” 27
“High seas conservation” 27
“Indigenous-led conservation” 27

[[1]][[2]]

1) Conserve restore and regenerate places of value to nature and people. – Conservation finance
term count
“Public-private partnership” 35905
“Tourism revenue” 1547
“Conservation funding” 662
“Environmental finance” 349
“Debt-for-nature swap” 213
“Conservation finance” 182
“Biodiversity finance” 72
“Ecological finance” 33
“Conservation trust fund” 26
“Nature finance” 26
“Conservation philanthropy” 16
“Direct funding to community” 6
“Public funding for conservation” 6
“Resource user fee” 0
“Simplified funding application” 0

[[1]][[3]]

1) Conserve restore and regenerate places of value to nature and people. – Conservation regulation
term count
“Environmental Law” 24730
“Environmental Impact Assessment” 19524
“Legal pluralism” 4664
“Official development assistance” 4183
“Land Use Regulation” 2718
“Zoning Regulation” 1071
“Habitat Conservation Plan” 396
“Waste Management Regulation” 350
“Environmental public interest litigation” 217
“National biodiversity strategy and action plan” 152
“Seasonal restriction” 111
NBSAP 105
“Resource Management Law” 94
“Invasive Species Regulation” 40
“Logging regulation” 36
“Agri-Environmental and Climate Measure” 9

[[1]][[4]]

1) Conserve restore and regenerate places of value to nature and people. – Management
term count
Management 7162827
“Sustainable use” 28150
“Coastal management” 7000
“Integrated coastal zone management” 1677
“Ocean governance” 1372
“Transboundary water management” 416
“Marine governance” 384
“Coastal governance” 323
“Integrated landscape management” 181
“Sustainable wildlife management” 92
“Invasive alien species management” 49
“Marine and coastal governance” 15
“Coastal waters management” 13
“Landscape governance network” 0
“Land and marine resource management” 0
“Shared and integrated ocean governance” 0

[[1]][[5]]

1) Conserve restore and regenerate places of value to nature and people. – Monitoring
term count
Monitoring 3758149
“remote sensing” 298058
“environmental impact assessment” 19524
“citizen science” 16036
“forest monitoring” 2140
“ocean monitoring” 1108
“species monitoring” 1084
“marine monitoring” 1054
“coastal monitoring” 1014
“fish monitoring” 626
“marine mammal monitoring” 145
“plankton monitoring” 98
“open ocean monitoring” 11
“citizen science observation programme” 0

[[1]][[6]]

1) Conserve restore and regenerate places of value to nature and people. – Spatial planning
term count
“Spatial planning” 24741
“Land-use planning” 23468
“Environmental Impact assessment” 19524
“Master Plan” 18709
“Buffer zone” 10940
“Infrastructure Planning” 5419
“Strategic Environmental Assessment” 2942
“Marine spatial planning” 2394
“Zoning Regulation” 1071
“Maritime spatial planning” 586
“Land Use Permit” 176
“Participatory Planning Approach” 132
“Multi-functional landscape” 108
“Development Control Regulation” 63
“GIS and Spatial Analysis Tool” 26
“Cross-Boundary Coordination Mechanism” 0

[[1]][[7]]

1) Conserve restore and regenerate places of value to nature and people. – Right-based approaches
term count
“Human right” 21698
“Intellectual property right” 2674
“right to water” 2018
“Access and benefit sharing” 1256
“community rights” 1027
“Free prior and informed consent” 587
FPIC 553
UNDRIP 456
“Right of nature” 318
“communal rights” 274
“International Labour Organization Convention” 91
“Universal Declaration of Human Right” 50
“International Human Right Treaty” 20
“International Covenant on Civil and Political Right” 17
“Indigenous and local language” 9
“International Covenant on Economic Social and Cultural Right” 5
“United Nations Declaration on the Right of Indigenous Peoples” 5
“IPLC ownership” 0

[[2]] [[2]][[1]]

2) Drive systemic change in the sectors most responsible for biodiversity loss. – Certification and standards
term count
Standards 5516366
Labeling 1520403
Guidelines 1279270
Certification 270546
“ISO Standards” 8802
“Organic Certification” 1186
“green branding” 981
“LEED Certification” 692
Ecolabeling 253
“Fair Trade Certification” 180
“UTZ Certified” 99
“B-Corp Certification” 75
“Forest Stewardship Council Certification” 57
“Marine Stewardship Council Certification” 47
“Rainforest Alliance Certification” 39
“Non-GMO Project Verification” 10
“Aquaculture stewardship council certification” 0

[[2]][[2]]

2) Drive systemic change in the sectors most responsible for biodiversity loss. – Community-driven initiatives
term count
“Local reuse” 12421
“Farmers’ Market” 4656
“Community Garden” 3477
“Fix-up initiative” 957
“Bottom-up initiative” 917
“Urban gardening” 803
“Community goal” 766
“Local action group” 687
“Beach cleanup” 103
“Citizen-led initiative” 83
“Transition Town Movement” 67
“Community-based Renewable Energy Project” 22
“Zero Waste Community” 19
“Community-Led Conservation Initiative” 9
“Local Food Cooperative” 5
“Community-Led Sustainable Transportation Initiative” 0
“Local Currency and Exchange System” 0
“Neighborhood Repair and Reuse Program” 0
“Street and Neighborhood Cleanup Campaign” 0

[[2]][[3]]

2) Drive systemic change in the sectors most responsible for biodiversity loss. – Green infrastructure
term count
“Public transport” 50707
“Constructed wetland” 16549
“Green infrastructure” 9845
Biofilter 8825
“Urban park” 8279
“Urban forest” 8023
“Green roof” 7304
“Water infrastructure” 5905
“Energy efficient building” 4943
“Sustainable infrastructure” 2757
“Riparian buffer” 2461
“green logistics” 1939
“Permeable pavement” 1786
“Green wall” 1611
“Rain garden” 1240
“Ecological restoration project” 1031
“Green street” 586
“Sustainable drainage system” 576
“Floodplain restoration” 376
“Living shoreline” 337
Bioswale 335
“Vegetated swale” 192
“Access to urban service” 110
“Multi-purpose structure” 44
“Urban agriculture space” 29
“Blue-green corridor” 10
“Articulated density in city” 0
“Food storage and delivery system” 0

[[2]][[4]]

2) Drive systemic change in the sectors most responsible for biodiversity loss. – Green technology
term count
“Wind turbine” 115980
Biofuel 82852
Biomimetic 63133
microgrid 53079
“Solar panel” 27125
“Geothermal energy” 16482
“Smart meter” 14680
“Hybrid vehicle” 10321
“local currency” 3995
“Solar photovoltaic system” 3620
“Climate-smart agriculture” 2998
“Green building material” 920
“Fuel-efficient vehicle” 665
Minigrid 435
“Biomass energy production” 359
“Coordinated transport” 253
“energy sharing platform” 11
“Coordinated heating” 6
“Replacement fertilizer” 6
“Small renewable energy technology” 1
“Coordinated electrification” 0
“High Carbon Stock landscape” 0
“Microbial plant bio-fertilizer” 0
“Microbial plant bio-stimulant” 0
“Microbial plant regulator” 0
“Microbial plant biocide” 0

[[2]][[5]]

2) Drive systemic change in the sectors most responsible for biodiversity loss. – Market-based mechanisms
term count
“Corporate Social Responsibility” 79749
“Environmental Tax” 4142
“carbon credit” 3875
“income transfer” 2509
“Socially responsible investment” 2357
“Reducing Emission from Deforestation and Forest Degradation” 1891
“Derivative trading” 1098
“Cap and Trade System” 1071
“Tradable permit” 1017
“Subsidy Reform” 1015
“Biodiversity offset” 753
“Trade ban” 457
“Compensation for environmental damage” 182
“Commodity future” 176
“cap and share” 78
“Mitigation for environmental damage” 78
“Ecological fiscal transfer” 51
“Restoration for environmental damage” 20
“Reform environmental-harmful subsidy” 11
“Market-based financing mechanism” 10
“Biodiversity compensation policy” 7
“True cost pricing” 5
“Commodity chain regulation” 3

[[2]][[6]]

2) Drive systemic change in the sectors most responsible for biodiversity loss. – Market regulation
term count
“Compliance” 562577
Enforcement 486034
“Corporate social responsibility” 79749
Quota 59921
“public procurement” 15630
“Regulatory measure” 6191
“Licensing and Permitting” 4742
“Pollution Control Measure” 1358
“Limit on pollution” 429
“Sustainable public procurement” 394
“Wildlife trade regulation” 33
“Regulation of Resource Extraction” 6
“Cap resource consumption” 1
“converson off-budget subsidies” 0
“Food market transparency” 0
“Legislative control over pesticide use” 0
“sanction to biodiversity damage” 0
“Unsustainable use of biodiversity sanction” 0

[[2]][[7]]

2) Drive systemic change in the sectors most responsible for biodiversity loss. – Sustainable production
term count
Compost 80018
Agroforestry 24400
“Crop rotation” 23530
“Sustainable production” 18991
“Organic agriculture” 9894
“fair trade” 9882
“Sustainable design” 9417
Agroecology 6800
“Conservation tillage” 6475
“Crop diversification” 3770
“Climate-smart agriculture” 2998
“Sustainable fishing” 816
“Regenerative agriculture” 633
“Reduced impact logging” 577
“Biological agriculture” 531
“Carbon farming” 511
“Swidden agriculture” 479
“Responsible production” 473
“Sustainable agricultural intensification” 422
“Sustainable fishing practice” 154
“Regenerative farming” 108
“Best practice in production” 72
“Holistic planned grazing” 24

[[3]] [[3]][[1]]

3) Transform economic systems to address power inequities and extractivist activities – Alternative business models
term count
“Open Source” 210679
“Fair Trade” 9882
“Collaborative Consumption” 1235
“Cradle-to-Cradle” 1220
“Community-supported Agriculture” 1184
“Benefit Corporation” 916
“Alternative business model” 507
“B-Corp” 282
“B-Corporation” 264
“Platform Cooperative” 146
“Subscription-based Model” 118
“Employee-owned Business” 48

[[3]][[2]]

3) Transform economic systems to address power inequities and extractivist activities – Alternative economic models
term count
Commoning 5683388
“Circular economy” 39284
“Sharing economy” 10165
Bioeconomy 7695
Degrowth 2343
“circular bioeconomy” 1684
“Caring economics” 915
“Ecosystem accounting” 511
“Steady state economy” 331
“Natural capital accounting” 318
“Mainstreaming biodiversity” 210
“Alternative economic model” 178
“downscale production” 10
“Ecosystem services provisioning scheme” 0

[[3]][[3]]

3) Transform economic systems to address power inequities and extractivist activities – Environmental governance
term count
“Food sovereignty” 3839
“Guidelines for Securing Sustainable Small-Scale Fisheries” 95
“Guidelines on the Responsible Governance of Tenure of Land Fisheries and Forests” 43
“Land and water stewardship” 7

[[3]][[4]]

3) Transform economic systems to address power inequities and extractivist activities – Supply chain governance and transparency
term count
Standard 5516363
Labeling 1520403
Certification 270546
Guideline 246785
“ISO Standard” 8802
“Third-party auditing” 492
“Third-party verification” 415
“Public procurement policy” 341
“Mandatory reporting requirement” 193
“Whistleblower protection law” 64
“Consumer demand for transparency” 9
“Collaborative supply chain initiative” 8
“Corporate disclosure mandate” 2
“relocalize economy” 2
“Supplier code of conduct requirement” 1
“Blockchain technology for supply chain tracking” 0
“Multi-stakeholder partnership for transparency” 0
“Supply chain traceability regulation” 0
“Traffic light nutrient labeling” 0

[[3]][[5]]

3) Transform economic systems to address power inequities and extractivist activities – Sustainable consumption
term count
reuse 490442
recycle 425989
“Sustainable consumption” 7531
“reduce consumption” 7476
“plant-based diet” 4251
“shared ownership” 3613
“Responsible consumption” 2106
“green consumption” 1993
“Collaborative Consumption” 1235
“Tax on consumption” 943
“Food waste reduction” 860
“Dietary transition” 674
“Normative feedback” 509
“Behavioral nudge” 298
“shared consumption” 286
“Localized food system” 81
“frugal consumption” 26
“Sustainable sourcing practice” 20
“Fee on consumption” 14
“low-impact diet” 5
“Campaign on consumer good” 0
“ban on planned obsolescence” 0
“Overbuying discouragement” 0
“relocalize consumption” 0

[[3]][[6]]

3) Transform economic systems to address power inequities and extractivist activities – Sustainability and well-being measures
term count
“Human Development Index” 12584
“Ecological Footprint” 9566
“Living Standards Survey” 1603
“Gross National Happiness” 779
“World Happiness Report” 252
“Genuine Progress Indicator” 234
“Social Progress Index” 225
“Happy Planet Index” 177
“Index of Sustainable Economic Welfare” 147
“Inclusive Wealth Index” 37
“wellbeing budget” 32
“Doughnut planning” 1

[[4]] [[4]][[1]]

4) Transform governance systems to be inclusive accountable and adaptive. – Anti-corruption measures
term count
“Convention against Corruption” 821
“Address corruption” 479
“measures against corruption” 95
“Whistleblower protection law” 64
“Disempowering corruption” 0
“Disempowering elitism” 0
“Disempowering lobbyism” 0
“Dismantle vested interests” 0

[[4]][[2]]

4) Transform governance systems to be inclusive accountable and adaptive. – Customary governance
term count
“Customary law” 11395
“Indigenous education” 2431
“Community cooperation” 928
“Customary institution” 710
“Customary norm” 667
“Customary tenure” 501
“Resource stewardship” 423
“Indigenous and local knowledge” 383
“Indigenous jurisdiction” 118
“Intergenerational knowledge transmission” 25
“IPLC governance” 1
“Customary access right” 0
“IPLC led code of conduct” 0
“IPLC livelihood and economy” 0

[[4]][[3]]

4) Transform governance systems to be inclusive accountable and adaptive. – Engagement and participation
term count
Participation 1125253
“Co-Design” 26993
“Co-Creation” 23237
“Stakeholder engagement” 13111
“Community-Based Participatory Research” 6991
“Community-based natural resource management” 975
“Community-based monitoring” 578
“Citizen Science Initiative” 572
“Multi-stakeholder platform” 331
“Online Platform and Social Media” 86
“Global action network” 38
“Capacity Building and Training Program” 27
“Deliberative Democracy Process” 20
“Community Meeting and Workshop” 14
“Participatory Mapping and GIS” 14
“Public Consultation and Hearing” 8
“Participatory evaluation and learning” 7
“Participation in international process” 6

[[4]][[4]]

4) Transform governance systems to be inclusive accountable and adaptive. – Governance and democratic processes
term count
“Policy monitoring” 1023
“Conflict resolution mechanism” 999
“Capacity building initiative” 953
“Citizen assembly” 595
“Adaptive management framework” 384
“Local governance structure” 273
“Open government initiative” 270
“Participatory decision-making process” 233
“Deep democracy” 232
“Access to information law” 186
“Transparency and accountability mechanism” 55
“Policy co-design” 24
“Bottom-up governance approach” 17
“Policy co-creation” 17
“Gender-responsive governance” 12
“Deliberative democracy mechanism” 8
“Gender inclusive governance” 8
“Accountable evaluation and learning” 0
“Culturally appropriate governance model” 0

[[4]][[5]]

4) Transform governance systems to be inclusive accountable and adaptive. – International agreements
term count
CITES 2830719
CMS 81639
“Sustainable Development Goal” 65746
SDG 47896
“International agreement” 23241
“Paris Agreement” 12865
“Kyoto Protocol” 11289
“United Nations Framework Convention on Climate Change” 5027
“Convention on Biological Diversity” 4915
“Agenda 21” 4347
“Bilateral agreement” 4225
“Montreal Protocol” 2969
“MARPOL” 1630
“Stockholm Convention” 1579
“Multilateral environmental agreement” 1175
“Global trade system” 1115
“Ramsar Convention” 1107
“Nagoya Protocol” 1100
“Convention on International Trade in Endangered Species of Wild Fauna and Flora” 932
“Basel Convention” 720
“Minamata Convention” 707
“Rio Declaration” 699
“Cartagena Protocol” 648
“EU Green Deal” 387
“United Nations Convention to Combat Desertification” 363

[[4]][[6]]

4) Transform governance systems to be inclusive accountable and adaptive. – Legislative reforms
term count
“Institutional reform” 13133
“Judicial independence” 4319
“Anti-discrimination law” 2156
“Institutional entrepreneurship” 1261
“Freedom of information law” 405
“Ombudsman institution” 334
“Decentralization law” 181
“Equitable access to justice” 28
“Public participation law” 16
“Legal empowerment of marginalized group” 1
“Environmental justice mechanism” 0
“Institutional setting reform” 0

[[5]] [[5]][[1]]

5) Promote narratives and norms of unity in diversity in support of global sustainability goals. – Behavioral change
term count
“Behavioral change” 85365
“Social norm” 42571
Nudging 21093
“Consumption reduction” 5737
“Peer-to-peer communication” 1581
“Choice architecture” 1415
“Food waste reduction” 860
“Dietary transition” 674
“Normative feedback” 509
“Behavioral nudge” 298
“Campaign on consumer goods” 2
“Overbuying discouragement” 0

[[5]][[2]]

5) Promote narratives and norms of unity in diversity in support of global sustainability goals. – Capacity building
term count
“Capacity building” 50519
“Awareness campaign” 12687
“Capacity development” 7154
“Practical learning” 3515
“Cultural revitalization” 622
“Cultural exchange program” 210
“Inner capacity” 142
“Inner development goal” 9
“Indigenous Peoples’ rights training and capacity building” 0
“Leadership and capacity for action” 0

[[5]][[3]]

5) Promote narratives and norms of unity in diversity in support of global sustainability goals. – Cultural change
term count
“Education program” 143407
“Social media platform” 36637
“Artistic expression” 9599
“Cultural transformation” 7765
“Cultural event” 6716
“Cultural narrative” 3326
“Mass media campaign” 1888
“Community dialogue” 742
“Regenerative culture” 213
“Youth empowerment program” 105
“Civic engagement initiative” 69
“Storytelling initiative” 55

[[5]][[4]]

5) Promote narratives and norms of unity in diversity in support of global sustainability goals. – Education
term count
Curriculum 617065
Imagination 172223
“K-12” 48813
“Environmental education” 28750
“Experiential learning” 28169
“Social learning” 27086
“Solution space” 18234
“Adult learning” 12408
“Transformative learning” 7876
“Transformational learning” 1724
“Experiential teaching” 908
“Transformation lab” 45
“Imagination infrastructure” 2

[[5]][[5]]

5) Promote narratives and norms of unity in diversity in support of global sustainability goals. – Holistic approaches to sustainability
term count
“Community health” 85805
“One health” 16694
“Cultural pluralism” 3436
“Alternative vision” 3272
“Planetary health” 3034
“Harmony with nature” 2535
“Holistic management” 2075
“Water-energy-food nexus” 1363
“Rights of nature” 1304
“Indigenous worldview” 834
Ecohealth 639
“Respect for cultural diversity” 298
“Holistic worldview” 286
“Living in balance” 188
“Unitive education” 187
“Eco-centrism” 120
“Unitive vision” 47
“Unitive narrative” 30
“Interconnectedness of all life forms” 9
“Natural social contract” 9
“Recognition of the intrinsic value of biodiversity” 1
“Localism and decentralization of governance” 0
“Recognition of indigenous wisdom and knowledge” 0

[[5]][[6]]

5) Promote narratives and norms of unity in diversity in support of global sustainability goals. – Network and collaboration
term count
Facilitator 2315027
Connector 661114
“Social network” 279650
Multiplier 274457
Intermediary 98335
“Learning network” 18788
“Change agent” 11924
“Knowledge network” 7599
“Community network” 6227
“Policy network” 6171
“Technology network” 3911
“Collaborative initiative” 2581
“Advocacy network” 1584
“Boundary spanner” 1467
“Partnership network” 762
“Boundary organization” 474
“Multi-stakeholder platform” 331
“Innovation broker” 133
“Respectful partnership” 126
“Transition intermediary” 47
“Middle actor” 42
“Indigenous peoples’ network” 5
“New collaborative setting” 2

[[5]][[7]]

5) Promote narratives and norms of unity in diversity in support of global sustainability goals. – Knowledge co-creation
term count
“Local knowledge” 17748
“Indigenous knowledge” 16938
“Knowledge system” 16028
“Knowledge co-production” 776
“Knowledge co-creation” 515
“Co-creation of knowledge” 484
“Indigenous and local knowledge” 383
“Art-science collaboration” 207
“Collaborative knowledge production” 170
“Participatory research and development” 100
“Weaving knowledge” 72
“Breeding knowledge” 67
“Collaborative research and learning” 42
“Multiple evidence-based approach” 22
“Collective knowledge generation” 19
“Jointly constructed knowledge” 12
“Joint knowledge development” 10
“Participatory knowledge creation” 8
“Knowledge co-design” 7
“Co-creative inquiry” 5

[[5]][[8]]

5) Promote narratives and norms of unity in diversity in support of global sustainability goals. – Psychological change
term count
Oneness 20017068
Connection 1441885
“Paradigm shift” 73112
Mindset 55925
“Support group” 30286
“Belief system” 17014
“Developmental psychology” 11404
Epiphany 7868
“Personal meaning” 4682
“Personal transformation” 2056
Biophilia 1456
“Identity shift” 975
“Inner transformation” 385
“Transformative learning spaces” 66

[[5]][[9]]

5) Promote narratives and norms of unity in diversity in support of global sustainability goals. – Spiritual and cultural practices
term count
Presence 5932823
Connectedness 1167367
Ritual 182140
Wisdom 161399
Festival 115121
Spirituality 85040
Mindfulness 80072
Ceremony 60773
HNC 12199
Psychedelic 7876
“Environmental stewardship” 4167
“Cultural preservation” 2481
“Connection to nature” 2378
“Interfaith dialogue” 1796
“Plant medicine” 1100
“Spiritual connection” 993
“Community solidarity” 873
“Cultural revitalization” 622
“Prayer and meditation” 467
“Community bond” 293
“Human-nature connection” 128
“Sacred teaching” 98
“Human-nature relation” 60
“Interfaith collaboration” 44
“Ethical framework and value” 13
“Continuity with ancestors” 10


5) Promote narratives and norms of unity in diversity in support of global sustainability goals. – Values for sustainability
term count
Values 11363885
Justice 672775
Integrity 594436
Equity 382580
Resilience 318638
Transparency 272374
relationality 256130
Adaptability 239745
Empathy 122015
Thriving 106126
Interdependence 83681
Reciprocity 60594
Compassion 55977
Sufficiency 55269
Stewardship 54915
“Systems thinking” 16708
Holism 14716
“Good life” 13193
“Precautionary principle” 6719
“Respect for Diversity” 1616
“Balanced relationship” 1039
“relational values” 990
“Caring for nature” 954

Counts over Years for each TCA Corpus AND Concept

Show the code
strategies_options <- readRDS(file.path("data", "tca_corpus", "strategies_options.rds"))

# Flatten the nested list (strategy -> concept -> years/counts)
# into one long data frame and render it as an interactive table.
lapply(
    names(strategies_options),
    function(str) {
        lapply(
            names(strategies_options[[str]]),
            function(con) {
                data.frame(
                    Strategy = str,
                    Concept = con,
                    Year = strategies_options[[str]][[con]]$years,
                    Count = strategies_options[[str]][[con]]$count
                )
            }
        ) |>
            do.call(what = rbind)
    }
) |>
    do.call(what = rbind) |>
    IPBES.R::table_dt(
        fn = "strategies_options_counts_per_year",
        fixedColumns = list(leftColumns = 2)
    )
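The flattening step in the chunk above can be illustrated with a minimal synthetic list; the structure below is an assumption mirroring what `strategies_options.rds` is expected to contain (strategy, then concept, then `years`/`count` vectors):

```r
# Synthetic stand-in for the nested list: strategy -> concept -> years/counts
opts <- list(
    "Strategy A" = list(
        "Concept 1" = list(years = c(2019, 2020), count = c(10, 12)),
        "Concept 2" = list(years = 2020, count = 5)
    )
)

# Same nested-lapply / rbind pattern as in the chunk above:
long <- do.call(rbind, lapply(names(opts), function(str) {
    do.call(rbind, lapply(names(opts[[str]]), function(con) {
        data.frame(
            Strategy = str,
            Concept = con,
            Year = opts[[str]][[con]]$years,
            Count = opts[[str]][[con]]$count
        )
    }))
}))

print(long)  # one row per (Strategy, Concept, Year) combination
```

Each `data.frame()` call recycles the scalar `Strategy` and `Concept` values across the `years`/`count` vectors, so the two concepts above yield three rows in total.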

Differences between Initial and Final Search Terms

The differences are shown as a visual diff.

CASE

The diff below compares the original search terms with the search terms actually used for CASES.

Show the code
diffviewer::visual_diff(
    file.path("input", "tca_corpus", "search terms", "case.org.txt"),
    file.path("input", "tca_corpus", "search terms", "case.txt")
)

Strategies / Concepts

The diff below compares the original search terms with the search terms actually used for STRATEGIES_CONCEPTS.

Show the code
diffviewer::visual_diff(
    file.path("input", "tca_corpus", "search terms", "strategies_options.org.txt"),
    file.path("input", "tca_corpus", "search terms", "strategies_options.txt")
)

Reuse

Citation

BibTeX citation:
@report{krug2024,
  author = {Krug, Rainer M.},
  title = {Data {Management} {Report} {Transformative} {Change}
    {Assessment} {Corpus} - {SOD}},
  date = {2024-04-19},
  doi = {10.5281/zenodo.10251349},
  langid = {en},
  abstract = {The literature search for the assessment corpus was
    conducted using search terms provided by the experts and refined in
    co-operation with the Knowledge and Data task force. The search was
    conducted using {[}OpenAlex{]}(https://openalex.org), scripted from
    {[}R{]}(https://cran.r-project.org) to use the
    {[}API{]}(https://docs.openalex.org). Search terms for the following
    searches were defined: **Transformative Change**, **Nature /
    Environment** and **additional search terms for individual chapters
    and sub-chapters** To assess the quality of the corpus, sets of
    key-papers were selected by the experts to verify if these are in
    the corpus. These key-papers were selected per chapter / sub-chapter
    to ensure that the corpus is representative of each chapter.}
}
For attribution, please cite this work as:
Krug, Rainer M. 2024. “Data Management Report Transformative Change Assessment Corpus - SOD.” Report Transformative Change Assessment Corpus. https://doi.org/10.5281/zenodo.10251349.